
    Looking ahead: Industry insiders predict 2024 AI legal challenges

    2024.01.02 | exchangesranking | 133 onlookers

    Over the last year, as artificial intelligence (AI) has become a more prominent tool for everyday use, the legal landscape around the technology has begun to develop. 

    From global regulations and laws starting to take shape to myriad lawsuits alleging copyright and data infringement, AI was on everyone’s radar.

    As 2024 approaches, Cointelegraph asked industry insiders working at the intersection of law and AI to help break down the lessons of 2023 and what they could mean for the year to come. For a comprehensive overview of what happened in AI in 2023, don’t forget to check out Cointelegraph’s “Ultimate 2023 AI Guide.”

    Delays in EU AI Act enforcement

    In 2023, the European Union became one of the first regions to make significant headway in passing legislation to regulate the deployment and development of high-level AI models.

    The “EU AI Act” was initially proposed in April and was passed by Parliament in June. On Dec. 8, European Parliament and Council negotiators reached a provisional agreement on the bill.

    Once fully effective, it will regulate government use of AI in biometric surveillance, oversee large AI systems like ChatGPT and set transparency rules developers should follow before entering the market.

    However, the bill has already received criticism from the tech sector for “over-regulation.”

    With pushback from developers and a track record of delays, Lothar Determann, partner at Baker McKenzie and author of Determann’s Field Guide to Artificial Intelligence Law, told Cointelegraph:

    “It does not seem entirely impossible that we might see a similarly delayed timeline with the enactment of the EU AI Act.”

    Determann pointed out that although the agreement was reached in early December, a final text has yet to be seen. He added that several politicians from key member states, including the French president, have expressed concern about the current draft.

    “This reminds me of the trajectory of the e-privacy regulation, which the EU announced in 2016 would take effect with the General Data Protection Regulation in May 2018, but which still has not been finalized five years later.”

    Laura De Boel, a partner in the Brussels office of law firm Wilson Sonsini Goodrich & Rosati, also pointed out that the December development is a “political agreement,” with formal adoption yet to come in early 2024.

    She explained further that EU lawmakers have included a “phased grace period,” during which:

    “The rules on prohibited AI systems will apply after six months, and the rules on General Purpose AI will apply after 12 months,” she said. “The other requirements of the AI Act will apply after 24 months, except that the obligations for high-risk systems defined in Annex II will apply after 36 months.”

    Compliance challenges 

    Despite a flurry of new regulations entering the scene, 2024 will present some challenges for companies in terms of compliance.

    De Boel said that the European Commission has already called on AI developers to voluntarily implement the key obligations of the AI Act even before they become mandatory:

    “They will need to start building the necessary internal processes and prepare their staff.”

    However, Determann said that even without a comprehensive AI regulatory scheme, “we’ll see compliance challenges as businesses grapple with the application of existing regulatory schemes to AI.”

    This includes the EU General Data Protection Regulation (GDPR), privacy laws around the world, intellectual property laws, product safety regulations, property laws, trade secrets, confidentiality agreements and industry standards, among others.

    On that note, in the United States, the administration of President Joe Biden issued a lengthy executive order on Oct. 30 intended to protect citizens, government agencies and companies by establishing AI safety standards.

    The order established six new standards for AI safety and security, including intentions for ethical AI usage within government agencies.

    While Biden is quoted saying that the order aligns with the government’s principles of “safety, security, trust, openness,” insiders in the industry said it has created a “challenging” climate for developers.

    This primarily boils down to discerning concrete compliance standards out of vague language.

    In a previous interview with Cointelegraph, Adam Struck, a founding partner at Struck Capital and an AI investor, said the order makes it tricky for developers to anticipate future risks and compliance requirements, since the legislation is based on assumptions about products that aren't fully developed yet. He said:

    “This is certainly challenging for companies and developers, particularly in the open-source community, where the executive order was less directive.”

    Related: ChatGPT’s first year marked by existential fear, lawsuits and boardroom drama

    More specific laws

    Another expectation for the 2024 legal landscape is more specific, narrowly framed laws. This can already be seen as some countries deploy regulations against AI-generated deepfakes.

    Regulators in the U.S. are already considering introducing regulations on political deepfakes in the lead-up to the 2024 presidential elections. In late November, India began finalizing laws against deepfakes.

    Determann cautioned AI-related businesses and those using AI products:

    “Moving forward, businesses will need to stay up-to-date on these developments, which will include disclosure requirements for bots, restrictions on ‘deepfakes’ and audit requirements for job application evaluation systems.”

    He added that such narrowly focused laws tend to have a "better chance" of achieving their intended impact than overly broad regulations.

    “This is because businesses can understand and comply with them more easily, and authorities can enforce them more effectively,” he explained.

    There have also been rumblings that 2024 could bring regulations focused on investments in technology, a topic U.S. lawmakers have already begun discussing.

    In July, the U.S. Senate backed bipartisan legislation that requires local companies to report any investments in Chinese technologies, particularly semiconductors used in AI.

    Copyright clarification

    Many in the industry also expect to see clarification of what constitutes copyright infringement in the context of AI.

    Over the last year, there have been a number of high-profile lawsuits accusing leading AI developers like OpenAI, Microsoft, Meta and Google of violating copyright law.

    These lawsuits have covered almost every form of content, from art and music to literature and news.

    Most recently, on Dec. 27, The New York Times filed a case against OpenAI and Microsoft, alleging copyright infringement of its news content in the training of AI models. Prominent lawyers in the space have called this particular case a potential "watershed moment" for AI and copyright.

    With the majority of these cases ongoing, 2024 is expected to bring some sort of resolution to this issue — whether wholly or partially remains to be seen.

    What does AI have to say? 

    In the spirit of all things AI, Cointelegraph decided to ask ChatGPT itself what it believes will come of the legal scene surrounding AI in the upcoming year.

    When asked, “Can you give me some predictions for AI law in 2024?” ChatGPT responded with the following predictions:

    - More “strict regulations” globally addressing ethical concerns and biases.
    - Clearer “liability standards” for AI-related incidents, covering both developers and users.
    - A heightened “emphasis on data privacy laws.”
    - An “increased demand for transparency in AI algorithms.”
    - The emergence of “specialized boards or regulatory bodies” dedicated to overseeing AI implementation.
    - Potential updates to labor laws and new employment models that take AI into account.
    - Efforts to establish a “cohesive international framework for governing AI technologies.”
    - Implementation of “measures to protect consumers” from misleading or harmful AI applications.

    Whether ChatGPT is on to something, only time will tell. Make sure to watch this space in 2024 for the latest updates in all things AI.

