GPT-4 Will Be Multimodal
Microsoft Germany CTO Andreas Braun revealed that GPT-4, which will come out this week, will be multimodal.
Speaking at an event in Germany, Braun said, “We will introduce GPT-4 next week, there we will have multimodal models that will offer completely different possibilities – for example, videos.”
Unlike GPT-3.5, GPT-4 will be a multimodal LLM, meaning it can process information from sources beyond text, such as images and videos. GPT-3.5, OpenAI’s current LLM, is purely text-based; it was trained on a massive dataset of more than 570GB of text drawn from books, articles, and websites.
The GPT-3.5 model has more than 175 billion parameters, and GPT-4 is expected to have even more, which should make it more capable and accurate.
GPT-3 was first introduced in June 2020. Since then, ChatGPT has received regular upgrades that improve its training data, fine-tune the model, and add new features to enhance its performance and accuracy.