It’s no secret that ever since Microsoft integrated ChatGPT into its various services, the company and OpenAI have been hard at work on the next step in conversational AI: GPT-4. Now, in an effort to compete with OpenAI and Microsoft on this frontier, Google has announced a new state-of-the-art language model called PaLM 2, capable of a range of tasks including math, coding, reasoning, multilingual translation, and natural language generation.
Google’s senior research director, Slav Petrov, stated that PaLM 2 was trained on multilingual text spanning over 100 languages, which not only gives it an edge in understanding idioms and phrases across languages but also improves its reasoning and common sense. This is an important development, since these AI models often generate plausible-sounding but false information.
Moreover, to make PaLM 2 more suitable for its enterprise customers, Google has also created different versions of the system to cater to specific needs. These include Med-PaLM 2, which was trained on health data and can answer questions similar to those found in the US Medical Licensing Examination at an “expert” level, and Sec-PaLM 2, which can help detect threats in code and explain the behaviour of potentially malicious scripts.
Google says it is already using PaLM 2 to power 25 features and products, including its experimental chatbot, Bard, as well as Google Workspace apps like Docs, Slides, and Sheets. Additionally, to make the AI system practical to run on phones, Google has developed a lightweight version of PaLM 2 called Gecko, which can process 20 tokens per second.
PaLM 2’s successor is already in the works
Although Google’s new PaLM 2 language model is already one of the most sophisticated AI systems available, the company is working on a successor called Gemini, which is expected to be more efficient and multimodal.
However, the race to build the most advanced AI system has sparked debate about the potential threats these systems pose, including misuse, manipulative language, and misinformation. As a result, companies like Google and OpenAI will need to put stringent safeguards in place to ensure that this rapid development does not come at the cost of these AI systems going off the rails.