
    Anthropic says no client data used in AI training

    2024.01.07 | exchangesranking

    Generative artificial intelligence (AI) startup Anthropic has promised not to use client data for large language model (LLM) training, according to updates to the Claude developer’s commercial terms of service. The company also pledged to support users in copyright disputes.

    Anthropic, led by ex-OpenAI researchers, revised its commercial terms of service to clarify its stance. Effective from January 2024, the changes state that Anthropic’s commercial customers also own all outputs from using its AI models. The company “does not anticipate obtaining any rights in Customer Content under these Terms.”

    In the latter half of 2023, OpenAI, Microsoft and Google pledged to support customers facing legal claims over copyright related to the use of their technologies.

    Anthropic has made a similar commitment in its updated commercial terms of service, pledging to protect customers from copyright infringement claims arising from the authorized use of the company’s services or outputs. Anthropic stated:

    “Customers will now enjoy increased protection and peace of mind as they build with Claude, as well as a more streamlined API that is easier to use.” 

    As part of its legal protection pledge, Anthropic said it will pay for any approved settlements or judgments resulting from its AI’s infringements. The terms apply to Claude API customers and those using Claude through Bedrock, Amazon’s generative AI development suite.

    Related: Google taught an AI model how to use other AI models and got 40% better at coding.

    The terms state that Anthropic does not plan to acquire any rights to customer content and does not provide either party with rights to the other’s content or intellectual property by implication or otherwise.

    Advanced LLMs like Anthropic’s Claude, OpenAI’s GPT-4 and Meta’s Llama 2 are trained on extensive text data. An LLM’s effectiveness depends on diverse and comprehensive training data, which improves accuracy and contextual awareness by exposing the model to a wide range of language patterns, styles and new information.

    Universal Music Group sued Anthropic in October 2023, alleging copyright infringement of “vast amounts of copyrighted works - including the lyrics to myriad musical compositions” owned or controlled by the publishers.

    Meanwhile, author Julian Sancton is suing OpenAI and Microsoft for allegedly using his nonfiction work without authorization to train AI models, including ChatGPT.

    Magazine: Top AI tools of 2023, weird DEI image guardrails, ‘based’ AI bots: AI Eye
