
AI Safety, Variants, and Future

Officials from Downing Street are working urgently to finalize a joint statement from global leaders addressing the rising apprehensions about artificial intelligence.

This swift move is in preparation for the UK's upcoming AI Safety Summit, set to be held at the iconic Bletchley Park next month.

The summit, meant to discuss the White House-brokered safety protocols and to deliberate on national security agencies' oversight of potentially perilous AI models, confronts an obstacle: beyond its proposed joint statement, it might not succeed in reaching an accord to form a new international body to oversee advanced AI.

The envisioned AI Safety Institute, an initiative of the UK government, seeks to oversee high-risk AI models in the context of national security. Yet, this vision might falter if there’s no global consensus.

Claire Trachet, a renowned tech expert and the head of advisory firm Trachet, commented:

“This represents a pivotal juncture for the UK, given the burgeoning AI aspirations across Europe. It’s crucial for the UK to harmonize its innovative zeal with judicious regulations that don’t curb its growth trajectory. The UK has the ingredients to lead in global tech, but it demands strategic action – ranging from research investments and strengthening supply networks to collaboration and talent cultivation, setting the UK as a leader in AI’s future.”

Presently, the UK is a major figure in global tech, with its AI sector estimated at around £16.9 billion and projected to skyrocket to £803.7 billion by 2035, according to the US International Trade Administration.

The UK government's dedication is evident in its £1 billion infusion into supercomputing and AI research. The rollout of AI regulatory principles – emphasizing accountability, inclusivity, choice, adaptability, fairness, and transparency – signals the government's commitment to a resilient AI environment.

However, France is rapidly emerging as a European powerhouse in AI.

French magnate Xavier Niel recently pledged a €200 million investment in AI, which includes a research hub and a supercomputer, to amplify Europe’s global AI footprint.

This move resonates with President Macron's strategy; at VivaTech, he unveiled €500 million to forge new AI leaders. Additionally, France is enticing businesses via its own AI summit.

Claire Trachet observes the escalating UK-France rivalry, noting that while competition complicates the UK’s ambitions, it could invigorate the sector. Still, Trachet underscores the UK’s need to harmonize innovation and judicious regulation to ensure continued growth.

“In my perspective, for Europe to etch a lasting imprint, there’s a need for pooling resources, endorsing partnerships, and cultivating a robust AI infrastructure,” Trachet elaborated.

“This entails amalgamating the might of the UK, France, and Germany, potentially crafting a compelling AI narrative in the ensuing decade, albeit demanding a shared vision and cooperation.”

What is conspicuously missing from these discussions to date is any mission statement or set of OKRs describing the desired outcomes, or any definition of whose guardrails enforcement would fall under. ChatGPT, AI, and LLMs constitute a global asset class whose interactivity is connected via the Internet, which places no restrictions on who may do what to whom.

It’s like trying to regulate breathing.

Froggy's Wild Ride

In August, OpenAI launched a new web crawler called “GPTBot” with the objective of enhancing the proficiency of upcoming GPT iterations.

The firm believes that the information collected by GPTBot might significantly improve model precision and broaden its functionalities, representing a considerable advancement in AI language models.

Web crawlers, sometimes termed web spiders, are crucial for cataloging online content. Prominent search platforms like Google and Bing depend on such bots to furnish their search outcomes with pertinent web entries.
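To make that mechanism concrete, here is a minimal sketch of a polite crawler in Python, using only the standard library; the example.com URLs are placeholders rather than anything GPTBot actually targets:

import urllib.robotparser
import urllib.request

# Fetch and parse the site's crawl rules before requesting any pages.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

page = "https://example.com/articles/ai-safety"
if rp.can_fetch("GPTBot", page):
    # Permitted: download the page for cataloging (or, in GPTBot's case, training data).
    with urllib.request.urlopen(page) as response:
        html = response.read()
else:
    print("robots.txt tells GPTBot to stay out of this URL")

Production crawlers layer queueing, politeness delays, and deduplication on top of this check, but the robots.txt gate is the part site owners control directly.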

Unlike conventional crawlers, GPTBot’s mission is specific: it aims to accumulate public data while diligently avoiding areas behind paywalls, personal information repositories, or content that doesn’t align with OpenAI’s guidelines.

For webmasters preferring to keep GPTBot at bay, a straightforward "disallow" directive in the site's robots.txt file will suffice. This provision lets them dictate which parts of their site the crawler can access.
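Concretely, the directives follow standard robots.txt syntax; in this sketch, the /private/ and /blog/ paths are placeholder directory names:

User-agent: GPTBot
Disallow: /private/
Allow: /blog/

Setting Disallow to / (and dropping the Allow line) shuts the crawler out of the entire site.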

The introduction of GPTBot comes shortly after OpenAI applied for a trademark for “GPT-5”, which is projected to be the successor to the existing GPT-4 iteration.

The application, submitted to the United States Patent and Trademark Office on July 18, covers the use of “GPT-5” in AI-driven speech and text functions, audio transcription, voice detection, and speech generation.

Even though the GPT-5 trademark news has sparked interest, OpenAI’s CEO, Sam Altman, has urged caution. He disclosed that the company isn’t close to starting GPT-5’s development, citing the necessity for thorough safety evaluations first.

Recently, OpenAI has encountered its fair share of scrutiny. Questions about its data gathering methods, especially related to copyright and permissions, have arisen.

In June, OpenAI received an advisory from Japan’s data privacy body about unsanctioned data aggregation. Earlier, Italy temporarily halted ChatGPT use, pointing to alleged breaches of EU privacy norms.

Both OpenAI and Microsoft are also entangled in a class-action lawsuit brought by 16 individuals alleging unauthorized use of data from ChatGPT user conversations. Another lawsuit targets GitHub Copilot, with accusations that it failed to give proper credit after allegedly using developers' code.

If substantiated, these claims could place both companies in breach of the Computer Fraud and Abuse Act, a law frequently invoked in web-scraping litigation.

As OpenAI propels AI innovations, it will have to adeptly address these issues, ensuring an ethical and conscientious approach in the evolving AI domain.

And as the entire industry is learning, the road ahead is twisty and freighted with obstacles: regulations and safety guardrails seeking to limit reckless access and misuse; prohibition-style violations by corporate end users that will remain largely unenforceable; and the weaponization of GenAI engines, poisoned LLMs, and rapidly multiplying adversarial attack vectors driven by automation, fidelity, and replication.

Hang on to your hats.

Author

Steve King

Managing Director, CyberEd

King, an experienced cybersecurity professional, has served in senior leadership roles in technology development for the past 20 years. He has founded nine startups, including Endymion Systems and seeCommerce. He has held leadership roles in marketing and product development, operating as CEO, CTO and CISO for several startups, including Netswitch Technology Management. He also served as CIO for Memorex and was the co-founder of the Cambridge Systems Group.

 
