The company’s focus is on AI safety, with the goal of developing more predictable, reliable, and steerable AI systems. Through the collaboration, the teams hope to minimize bias in the AI model and address some of the current limitations of conversational AI.
“At Google, we believe it is critical to explore AI boldly and ethically,” said James Manyika, the company’s senior vice president of technology and society. “We are dedicated to creating and delivering helpful and beneficial applications while adhering to responsible principles based on human values and safety.”
As a result of this collaboration, Google will have access to Anthropic’s flagship conversational AI product, Claude. Claude is a chatbot that understands and responds to natural language, much like ChatGPT, but it is currently available only in closed beta through a Slack integration.
Anthropic calls the process used to develop Claude “Constitutional AI,” a training approach that combines supervised learning and reinforcement learning to evaluate the model’s behavior against a set of guiding principles, with the aim of creating a more “harmless AI assistant.” According to Anthropic’s research paper on the approach, the model uses self-critique and revision to detect and eliminate harmful outputs.
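The self-critique-and-revision idea described above can be illustrated with a toy sketch. This is not Anthropic’s implementation: the constitution, the `generate`, `critique`, and `revise` functions below are all hypothetical stand-ins (in the real system, the language model itself performs the critique and revision steps, and the revised outputs are then used as training data).

```python
# Toy sketch of a Constitutional AI-style critique/revision loop.
# All functions here are simplified stand-ins, not Anthropic's system.

CONSTITUTION = [
    "Avoid producing harmful content.",
    "Be helpful and honest.",
]

def generate(prompt):
    # Stand-in for the model's initial, unfiltered response.
    return f"Response to: {prompt}"

def critique(response, principle):
    # Stand-in: a real system would ask the model itself whether
    # the response violates the given principle.
    return "harmful" in response.lower()

def revise(response, principle):
    # Stand-in: the model rewrites its own output to comply
    # with the principle it was critiqued against.
    return response.replace("harmful", "safe")

def constitutional_pass(prompt):
    # Generate, then check the output against each principle,
    # revising whenever a critique flags a violation.
    response = generate(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response, principle)
    return response

print(constitutional_pass("a harmful request"))
```

In the actual method, pairs of original and revised responses also feed a preference model, which is then used for reinforcement learning, so the assistant improves without humans labeling every harmful output.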