
Google might finally have an answer to GPT-4

Google has announced the launch of Gemini, its most capable artificial intelligence model to date, in three versions: Gemini Ultra, the largest and most capable; Gemini Pro, which is versatile across a wide range of tasks; and Gemini Nano, designed for specific tasks and mobile devices. Google plans to license Gemini to customers through Google Cloud for use in their own applications, in a challenge to OpenAI’s ChatGPT.

Gemini Ultra excels at MMLU (massive multitask language understanding), a benchmark on which Google says it outperforms human experts across subjects like math, physics, history, law, medicine, and ethics. It’s expected to power Google products like the Bard chatbot and the Search Generative Experience. Google aims to monetize its AI work and plans to offer Gemini Pro through its cloud services.

“Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research,” wrote CEO Sundar Pichai in a blog post on Wednesday. “It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information including text, code, audio, image, and video.”

An infographic showcasing how Google's Gemini AI is more efficient than ChatGPT. Image: Google

Starting December 13, developers and enterprises can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI, while Android developers can build with Gemini Nano. Gemini will also enhance Google’s Bard chatbot, which now uses Gemini Pro for advanced reasoning, planning, and understanding. An upcoming Bard Advanced, powered by Gemini Ultra, is set to launch next year and will likely be positioned to challenge GPT-4.
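For developers, access runs through a lightweight SDK or REST call. As a rough illustration only, the sketch below uses the google-generativeai Python package as it was documented around launch, and assumes an API key created in Google AI Studio and stored in a GOOGLE_API_KEY environment variable; model names and method details may differ between SDK releases and regions.

# Minimal sketch of calling Gemini Pro via the Gemini API in Google AI Studio.
# Assumes: pip install google-generativeai, plus an API key from Google AI Studio
# exported as GOOGLE_API_KEY. Exact names may vary between SDK releases.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-pro" is the text-oriented tier described in the article; image and
# other multimodal inputs go through a separate vision-capable variant.
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Explain what a tensor processing unit is in two sentences.")
print(response.text)

Enterprises going through Google Cloud Vertex AI would use Google Cloud’s own client libraries and authentication instead, but the request pattern is broadly the same.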

Despite questions about how Bard will be monetized, Google emphasizes creating a good user experience and has not provided specific details about pricing or access to Bard Advanced. The Gemini models, particularly Gemini Ultra, have undergone extensive testing and safety evaluations, according to Google. While Gemini Ultra is the largest model, Google claims it is more cost-effective and efficient than its predecessors.

Google also introduced its next-generation tensor processing unit, TPU v5p, for training AI models. The chip promises improved performance for the price compared to TPU v4. This announcement follows recent developments in custom silicon by cloud rivals Amazon and Microsoft.

The launch of Gemini, after a reported delay, underscores Google’s commitment to advancing AI capabilities. The company has been under scrutiny for how it plans to turn AI into profitable ventures, and the introduction of Gemini aligns with its strategy to offer AI services through Google Cloud. The technical details of Gemini will be further outlined in a forthcoming white paper, providing insights into its capabilities and innovations.

Editors' Recommendations

Kunal Khullar
A PC hardware enthusiast and casual gamer, Kunal has been in the tech industry for almost a decade, contributing to names like…
Researchers just unlocked ChatGPT

Researchers have discovered that it is possible to bypass the safeguards built into AI chatbots, getting them to respond to queries on banned or sensitive topics, by using a different AI chatbot as part of the training process.

A team of computer scientists from Nanyang Technological University (NTU) in Singapore is unofficially calling the method a "jailbreak," though it is more formally a "Masterkey" process. The system pits chatbots, including ChatGPT, Google Bard, and Microsoft Bing Chat, against one another in a two-part training method that allows two chatbots to learn each other's models and divert commands around restrictions on banned topics.

OpenAI and Microsoft sued by NY Times for copyright infringement

The New York Times has become the first major media organization to take on AI firms in the courts, accusing OpenAI and its backer, Microsoft, of infringing its copyright by using its content to train AI-powered products such as OpenAI's ChatGPT.

In a lawsuit filed in Federal District Court in Manhattan, the media giant claims that “millions” of its copyrighted articles were used to train the companies’ AI technologies, enabling them to compete with the New York Times as a content provider.

Here’s why people are claiming GPT-4 just got way better

It appears that OpenAI is busy playing cleanup with its GPT language models after accusations that GPT-4 has been getting "lazy" and "dumb," and has been producing errors outside the norm for the ChatGPT chatbot, circulated on social media in late November.

Some are even speculating that GPT-4.5 has secretly been rolled out to some users, based on some responses from ChatGPT itself. Regardless of whether or not that's true, there have definitely been some positive internal changes behind GPT-4 recently.
More GPUs, better performance?
Posts noting the improvement in GPT-4's performance started rolling in as early as last Thursday. Wharton professor Ethan Mollick, who previously commented on the sharp downturn in GPT-4's performance in November, has also noted a revitalization in the model, without seeing any proof of a switch to GPT-4.5 himself. Consistently using a code interpreter to fix his code, he described the change as "night and day, for both speed and answer quality" after experiencing ChatGPT-4 being "unreliable and a little dull for weeks."
