GPT-4: a shift from ‘what it can do’ to ‘what it augurs’

Context: A U.S. company, OpenAI, has once again sent shock waves around the world, this time with GPT-4, its latest AI model. This large language model can understand and produce language that is creative and meaningful, and will power an advanced version of the company’s sensational chatbot, ChatGPT.

GPT-4 and what it can do

GPT-4 is a remarkable improvement over its predecessor, GPT-3.5, which first powered ChatGPT. 

  • Takes large prompts: While GPT-3.5 could not handle long prompts well, GPT-4 can take into context up to 25,000 words, an improvement of more than 8x.
  • More creative: Its biggest innovation is that it can accept text and image input simultaneously and consider both while drafting a reply. For example, given an image of ingredients and the question, “What can we make from these?”, GPT-4 gives a list of dish suggestions and recipes.
  • Performs well in tests designed for humans: For instance, in a simulated bar examination, it scored in the 90th percentile, whereas its predecessor scored in the bottom 10%. GPT-4 also sailed through advanced courses in environmental science, statistics, art history, biology, and economics.
    • However, GPT-4 failed to do well in advanced English language and literature, scoring 40% in both. Nevertheless, its language comprehension surpasses that of other high-performing language models, in English and 25 other languages, including Punjabi, Marathi, Bengali, Urdu and Telugu.
  • Understands human emotions: The model can purportedly understand human emotions, for instance by explaining what makes a picture humorous.
  • White collar jobs: OpenAI has released preliminary data to show that GPT-4 can do a lot of white-collar work, especially programming and writing jobs. 
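
The multimodal capability described above is typically exercised by sending a single user message that mixes text and image parts to a chat-style API. Below is a minimal sketch of what such a request payload might look like; the field layout follows OpenAI's published chat-message format, while the model name and image URL are placeholders, not real resources:

```python
# Sketch of a multimodal chat request payload. The structure follows
# OpenAI's chat message format; the model name and image URL below are
# placeholders for illustration only.
payload = {
    "model": "gpt-4",  # a vision-capable model would be needed in practice
    "messages": [
        {
            "role": "user",
            "content": [
                # One message can carry both a text part and an image part.
                {"type": "text", "text": "What can we make from these?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/ingredients.jpg"}},
            ],
        }
    ],
}

# The model's reply would come back as ordinary text, e.g. dish ideas.
parts = payload["messages"][0]["content"]
print([p["type"] for p in parts])  # → ['text', 'image_url']
```

The key point is that text and image travel together in one turn, so the model can reason over both when drafting its answer.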

If we define intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience”, GPT-4 already succeeds at four of these seven criteria. It is yet to master the remaining three: planning, learning quickly, and learning from experience.

Ethical questions

  • Threat to the examination systems: ChatGPT-generated text infiltrated school essays and college assignments almost instantly after its release; its prowess now threatens examination systems as well.
  • Integrity of data is not ensured: Its output may not always be factually correct, a trait OpenAI has called “hallucination”. While much better at sticking to facts than GPT-3.5, it may still introduce fictitious information subtly.
  • Lack of transparency: OpenAI has not been transparent about the inner workings of GPT-4, citing the competitive landscape and safety implications as reasons. While secrecy for safety sounds plausible, it also lets OpenAI sidestep critical scrutiny of its model, scrutiny that is important to instill confidence in AI-generated information.
  • Biases and stereotypes: GPT-4 has been trained on data scraped from the Internet, which contains several harmful biases and stereotypes. There is an assumption that a large dataset is also diverse and faithfully representative of the world at large; this is not true of the Internet, where even a huge dataset can be biased and incorrect.
  • OpenAI’s policy to fix these biases thus far has been to create another model to moderate the responses, since it finds curating the training set to be infeasible. Potential holes in this approach include the possibility that the moderator model is trained to detect only the biases we are aware of, and mostly in the English language. This model may be ignorant of stereotypes prevalent in non-western cultures, such as those rooted in caste.
  • Possible propaganda and disinformation engine: Simply asking GPT-4 to pretend to be “AntiGPT” causes it to ignore its moderation rules, a jailbreak demonstrated by its own makers. There is thus vast potential for GPT-4 to be misused as a propaganda and disinformation engine.
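
The blind spot in the moderation approach can be illustrated with a deliberately crude sketch: a filter built from a fixed list of known biased phrases catches only what its authors thought to list. Everything here, the `moderate` helper and the phrase list, is invented for illustration and is not OpenAI's actual moderator:

```python
# Toy illustration (NOT OpenAI's real moderation model): a filter that
# only knows a fixed, English-language set of biased phrases will pass
# anything outside that set unchallenged.
KNOWN_BIASED_PHRASES = {"women can't drive", "men don't cry"}  # invented examples

def moderate(text: str) -> bool:
    """Return True if the text is allowed through the filter."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in KNOWN_BIASED_PHRASES)

# A stereotype the filter was built to recognise is blocked...
print(moderate("Women can't drive, obviously."))            # → False
# ...but a caste-based stereotype the English-centric list never
# anticipated sails straight through.
print(moderate("People of that caste are untrustworthy."))  # → True
```

This is the structural problem the bullet above describes: a moderator model can only be as broad as the biases its builders already know to look for.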

Way forward

  • Responsible AI Development: Developers and researchers need to prioritize responsible AI development by considering the potential social and ethical implications of their work. This includes incorporating diverse perspectives in the development process, conducting rigorous testing, and addressing potential biases in the training data.
  • Transparency and Explainability: It is important for AI models to be transparent and explainable to users and stakeholders. This means providing clear documentation and explanations of how the model works and making it easier to interpret the outputs of the model. This can help build trust in the technology and enable users to understand and address any negative impacts.
  • Model Auditing: It is important to regularly audit AI models to identify and address potential biases and negative impacts. 
  • Data Governance: To mitigate the negative impact of generative AIs, we need better data governance practices. This means establishing clear guidelines for how data is collected, stored, and used, and ensuring that data is representative and unbiased.
  • Ethical Guidelines: Establish ethical guidelines covering data privacy and security, transparency and explainability, and fairness and accountability.
  • Liability Frameworks: Liability frameworks can help ensure that those responsible for developing and deploying generative AIs are held accountable for any negative impacts they cause. This includes establishing clear liability standards and implementing mechanisms for compensating those who are harmed by generative AIs.
  • Proactive policy making: OpenAI has released preliminary data to show that GPT-4 can do a lot of white-collar work, especially programming and writing jobs, while leaving manufacturing or scientific jobs relatively untouched. Wider use of language models will have further effects on economies. This requires proactive and futuristic policy making. 
  • Interdisciplinary Research: Addressing the negative impact of generative AIs requires interdisciplinary research that brings together experts in fields such as computer science, ethics, law, and sociology. This can help identify and address potential negative impacts from a variety of perspectives and ensure that solutions are holistic and effective.
  • Education and Awareness: It is important to educate the public and raise awareness about the potential negative impacts of AI technologies. This can empower individuals and communities to make informed decisions about its use.
  • User Feedback and Control: Users should be able to provide feedback on the output of generative AIs and have control over how their data is used. 

Mains practice question

Q. What potential advancements and improvements in language modelling and natural language processing are expected with the introduction of GPT-4? How might these advancements impact various industries and applications that rely on these technologies?
