
    OpenAI set off an arms race and our security is the casualty

    2024.04.11

    Since ChatGPT launched in late 2022 and made artificial intelligence (AI) mainstream, everyone has been trying to ride the AI wave — tech and non-tech companies, incumbents and start-ups — flooding the market with all sorts of AI assistants and trying to get our attention with the next “flashy” application or upgrade. 

    With the promise from tech leaders that AI will do everything and be everything for us, AI assistants have become our business and marriage consultants, our advisors, therapists, companions, confidants — listening as we share our business or personal information and other private secrets and thoughts.

    The providers of these AI-powered services are aware of the sensitivity of these discussions and assure us that they are taking active measures to protect our information from being exposed. Are we really being protected?

    AI assistants — friend or foe?

    Research published in March by researchers at Ben-Gurion University showed that our secrets can be exposed. The researchers devised an attack that deciphers AI assistant responses with surprising accuracy, despite their encryption. The technique exploits a vulnerability in the system design of all major platforms, including Microsoft’s Copilot and OpenAI’s ChatGPT-4, with the exception of Google’s Gemini.


    Furthermore, the researchers showed that once an attacker has built a tool to decipher a conversation — for example, with ChatGPT — the same tool works on other services as well, and thus could be shared (like other hacking tools) and used across the board with no additional effort.
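    The vulnerability reportedly hinges on a token-length side channel: assistants stream replies one token at a time, and each encrypted packet's size is the token's size plus a fixed overhead, so a passive eavesdropper learns the length of every token without breaking the encryption. A minimal sketch of that leak, with a hypothetical overhead value:

```python
# Sketch of a token-length side channel. Assumptions (not from the article):
# the assistant sends one token per encrypted record, and the cipher adds a
# constant per-record overhead, so ciphertext size = token size + constant.
FIXED_OVERHEAD = 21  # hypothetical bytes of nonce/tag/framing per record

def observed_packet_sizes(tokens):
    """Ciphertext sizes a passive network observer would record."""
    return [len(t.encode()) + FIXED_OVERHEAD for t in tokens]

def leaked_token_lengths(packet_sizes):
    """Recover the plaintext token lengths from the observed sizes."""
    return [s - FIXED_OVERHEAD for s in packet_sizes]

reply_tokens = ["Your", " test", " results", " came", " back", " positive"]
sizes = observed_packet_sizes(reply_tokens)
print(leaked_token_lengths(sizes))  # the exact length of every secret token
```

    The recovered length sequence is what the researchers' model then uses to guess the underlying text, which is why padding or batching tokens before encryption blunts the attack.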

    This is not the first research pointing to security flaws in the design and development of AI assistants. Other studies have been floating around for quite a while. In late 2023, researchers from several U.S. universities and Google DeepMind described how they could get ChatGPT to spew out memorized portions of its training data merely by prompting it to repeat certain words.

    The researchers were able to extract from ChatGPT verbatim paragraphs from books and poems, URLs, unique user identifiers, Bitcoin (BTC) addresses, programming code and more.

    Adversaries could use deliberately crafted prompts or inputs to trick the bots into regurgitating their training data, which may include sensitive personal and professional information.
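    To make the extraction risk concrete, here is a minimal, hypothetical sketch — not taken from the study — of how one might flag verbatim training-data leakage by matching long substrings of a bot's output against a reference corpus:

```python
# Hypothetical sketch: flag verbatim "memorized" passages in model output.
# `corpus` stands in for any text suspected to be in the training set.
def find_verbatim_leaks(model_output: str, corpus: str, min_len: int = 40):
    """Return substrings of model_output (>= min_len chars) that appear
    verbatim in corpus -- a simple proxy for memorized training data."""
    leaks = []
    i, n = 0, len(model_output)
    while i < n:
        window = model_output[i:i + min_len]
        if len(window) == min_len and window in corpus:
            # Greedily extend the match as far as it stays verbatim.
            j = i + min_len
            while j < n and model_output[i:j + 1] in corpus:
                j += 1
            leaks.append(model_output[i:j])
            i = j
        else:
            i += 1
    return leaks
```

    A long-enough verbatim match (the 40-character threshold here is arbitrary) is strong evidence of memorization rather than coincidence, which is essentially how such extraction results are verified against known text.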

    The security problems are even more acute with open-source models. A recent study showed how an attacker could compromise the Hugging Face conversion service and hijack any model submitted through it. The implications of such an attack are significant: the adversary could swap in a model of their own, push malicious models to repositories or access private repository datasets.

    To put things in perspective: the researchers identified organizations such as Microsoft and Google, which together have 905 models hosted on Hugging Face that received changes through the conversion service and might therefore have been exposed to — or compromised by — such an attack.

    Things can worsen

    AI’s new capabilities may be alluring, but the more power one gives to AI assistants, the more vulnerable one is to an attack.

    Bill Gates, writing in a blog last year, described how an overarching AI assistant (what he termed an “agent”) will have access to all our devices — personal and professional — to integrate and analyze the combined information to act as our “personal assistant.”

    As Gates wrote in the blog:

    An agent will be able to help you with all your activities if you want it to. With permission to follow your online interactions and real-world locations, it will develop a powerful understanding of the people, places, and activities you engage in. It will get your personal and work relationships, hobbies, preferences, and schedule. You’ll choose how and when it steps in to help with something or ask you to make a decision.

    This is not science fiction, and it could happen sooner than we think. Project 01, an open-source ecosystem for AI devices, recently launched an AI assistant called 01 Light. “The 01 Light is a portable voice interface that controls your home computer,” the company wrote on X. “It can see your screen, use your apps, and learn new skills.”

    Project 01 described on X how its 01 Light assistant works. Source: X

    It might be quite exciting to have such a personal AI assistant. However, unless the security issues are promptly addressed and developers meticulously ensure that the system and code are free of vulnerabilities, a successful attack on such an agent could hijack your entire life — including the information of any person or organization connected to you.

    Can we protect ourselves?

    In late March, the U.S. House of Representatives imposed a strict ban on congressional staffers' use of Microsoft's Copilot.

    "The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," said House Chief Administrative Officer Catherine Szpindor.

    In early April, the Cyber Safety Review Board (CSRB) published a report blaming Microsoft for a “cascade of security failures” that enabled Chinese threat actors to access U.S. government officials’ emails in the summer of 2023 — a breach that was preventable and should never have happened.


    As the report stated: "Microsoft has an inadequate security culture and requires an overhaul." This would most likely include security issues with Copilot.

    This is not the first ban on AI assistants. Technology companies such as Apple, Amazon, Samsung and Spotify, along with financial institutions including JPMorgan, Citi and Goldman Sachs, have banned their employees from using such bots.

    Major technology companies, including OpenAI and Microsoft, pledged last year to adhere to responsible AI. Since then, no substantial actions have been taken.

    Pledging is not enough. Regulators and policymakers should demand action. In the meantime, we should refrain from sharing any sensitive personal or business information with these bots.

    And maybe, if we collectively stop using these bots until substantial actions have been taken to protect us, we might have a chance to be “heard” and force companies and developers to implement the needed security measures.

    Dr. Merav Ozair is a guest author for Cointelegraph and is developing and teaching emerging technologies courses at Wake Forest University and Cornell University. She was previously a FinTech professor at Rutgers Business School, where she taught courses on Web3 and related emerging technologies. She is a member of the academic advisory board at the International Association for Trusted Blockchain Applications (INATBA) and serves on the advisory board of EQM Indexes — Blockchain Index Committee. She is the founder of Emerging Technologies Mastery, a Web3 and AI end-to-end consultancy shop, and holds a PhD from Stern Business School at NYU.

    This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts, and opinions expressed here are the author’s alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.
