What is Auto-GPT and why does it matter?

Silicon Valley’s quest to automate everything is unceasing, which explains its latest obsession: Auto-GPT.

In essence, Auto-GPT uses the versatility of OpenAI’s latest AI models to interact with software and services online, allowing it to “autonomously” perform multistep tasks, such as researching products, debugging code or drafting emails. But as we are learning with large language models, this capability seems to be as wide as an ocean but as deep as a puddle.

Auto-GPT — which you might’ve seen blowing up on social media recently — is an open source app created by game developer Toran Bruce Richards that uses OpenAI’s text-generating models, mainly GPT-3.5 and GPT-4, to act “autonomously.”

There’s no magic in that autonomy. Auto-GPT simply feeds OpenAI’s models an initial prompt and then handles the follow-ups itself, both asking and answering them until the task is complete.

Auto-GPT, basically, is GPT-3.5 and GPT-4 paired with a companion bot that tells the models what to do. A user gives Auto-GPT a goal and the bot, in turn, uses GPT-3.5 and GPT-4, along with several helper programs, to carry out every step needed to achieve that goal.
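
To make that loop concrete, here is a minimal sketch in Python of the pattern described above: ask the model for the next step toward a goal, carry it out, feed the result back in and repeat until the model says it’s done. This is not Auto-GPT’s actual code; the “gpt-4” model string, the DONE convention and the execute() helper are illustrative assumptions.

    # Minimal sketch of the self-prompting loop described above.
    # Not Auto-GPT's real implementation; "gpt-4", the DONE marker and
    # execute() are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    def execute(step: str) -> str:
        # Stand-in for the "companion bot" side: Auto-GPT would dispatch this
        # to a command such as a web search or a file write. Here we just echo.
        return f"(pretend we carried out: {step})"

    def run_agent(goal: str, max_steps: int = 5) -> None:
        history = [
            {"role": "system", "content": (
                "You are an autonomous agent. Propose one concrete next step "
                "toward the goal, or reply DONE when the goal is met.")},
            {"role": "user", "content": f"Goal: {goal}"},
        ]
        for _ in range(max_steps):
            reply = client.chat.completions.create(model="gpt-4", messages=history)
            step = reply.choices[0].message.content.strip()
            if step.upper().startswith("DONE"):
                break
            result = execute(step)
            history.append({"role": "assistant", "content": step})
            history.append({"role": "user", "content": f"Result: {result}. What next?"})

    run_agent("Find the best smartphones on the market")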

What makes Auto-GPT reasonably capable is its ability to interact with apps, software and services both online and local, like web browsers and word processors. For example, given a prompt like “help me grow my flower business,” Auto-GPT can develop a somewhat plausible advertising strategy and build a basic website.

As Joe Koen, a software developer who’s experimented with Auto-GPT, explained to TechCrunch via email, Auto-GPT essentially automates multi-step projects that would’ve required back-and-forth prompting with a chatbot-oriented AI model like, say, OpenAI’s ChatGPT.

“Auto-GPT defines an agent that communicates with OpenAI’s API,” Koen said. “This agent’s objective is to carry out a variety of commands that the AI generates in response to the agent’s requests. The user is prompted for input to specify the AI’s role and objectives prior to the agent starting to carry out commands.”

In a terminal, users describe the Auto-GPT agent’s name, role and objective and specify up to five ways to achieve that objective. For example (a sketch of how these fields might be turned into a prompt follows the list):

  • Name: Smartphone-GPT
  • Role: An AI designed to find the best smartphone
  • Objective: Find the best smartphones on the market
  • Goal 1: Do market research for different smartphones on the market today
  • Goal 2: Get the top five smartphones and list their pros and cons
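
As a rough illustration only (Auto-GPT’s real prompt template is its own; this wording is an assumption), those answers could be stitched into a single instruction for the model like so:

    # Illustrative only: folds the user's answers into one instruction string.
    # Auto-GPT's actual prompt template differs; this wording is an assumption.
    def build_prompt(name: str, role: str, goals: list[str]) -> str:
        goal_lines = "\n".join(f"{i}. {g}" for i, g in enumerate(goals, start=1))
        return (
            f"You are {name}, {role}.\n"
            f"Work autonomously toward these goals:\n{goal_lines}\n"
            "Decide on the next command to run and explain your reasoning."
        )

    print(build_prompt(
        "Smartphone-GPT",
        "an AI designed to find the best smartphone",
        ["Do market research for different smartphones on the market today",
         "Get the top five smartphones and list their pros and cons"],
    ))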

Behind the scenes, Auto-GPT relies on features like memory management and file storage to execute tasks, along with GPT-4 and GPT-3.5 for text generation and summarization.
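
The article doesn’t detail how that memory works. One common pattern for agents like this is a rolling summary: older steps get compressed so the prompt sent to the model stays small. A minimal sketch, with a purely hypothetical summarize() helper standing in for a GPT-3.5/GPT-4 call:

    # Sketch of rolling-summary memory, a common pattern for autonomous agents.
    # This is not Auto-GPT's code; summarize() is a hypothetical stand-in for
    # a GPT-3.5/GPT-4 summarization call.
    def summarize(text: str) -> str:
        return text[-500:]  # real code would ask the model to compress this

    class RollingMemory:
        def __init__(self, max_items: int = 10):
            self.summary = ""              # compressed record of older steps
            self.recent: list[str] = []    # verbatim record of recent steps
            self.max_items = max_items

        def add(self, step: str) -> None:
            self.recent.append(step)
            if len(self.recent) > self.max_items:
                # Fold the oldest half into the summary to bound prompt size.
                half = self.max_items // 2
                old, self.recent = self.recent[:half], self.recent[half:]
                self.summary = summarize(self.summary + "\n" + "\n".join(old))

        def context(self) -> str:
            return "Summary so far: " + self.summary + "\nRecent steps:\n" + "\n".join(self.recent)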

Auto-GPT can also be hooked up to speech synthesizers, like ElevenLabs’, so that it can “place” phone calls, for example.

Auto-GPT is publicly available on GitHub, but it does require some setup and know-how to get up and running. To use it, Auto-GPT has to be installed in an environment like Docker or a local Python setup, and it must be configured with an API key from OpenAI, which requires a paid OpenAI account.
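
For instance, a quick pre-flight check that the key is in place (illustrative; OPENAI_API_KEY is the environment variable the OpenAI client reads by default) might look like:

    # Illustrative pre-flight check that an OpenAI API key is configured.
    # OPENAI_API_KEY is the variable the OpenAI client reads by default.
    import os
    import sys

    if not os.environ.get("OPENAI_API_KEY"):
        sys.exit("Set OPENAI_API_KEY (from a paid OpenAI account) before starting Auto-GPT.")
    print("API key found; Auto-GPT can authenticate with OpenAI.")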

It might be worth it, although the jury’s out on that. Early adopters have used Auto-GPT to take on the sorts of mundane tasks better delegated to a bot: it can handle chores like debugging code and writing emails, or more advanced work, like creating a business plan for a new startup.

“If Auto-GPT encounters any obstacles or inability to finish the task, it’ll develop new prompts to help it navigate the situation and determine the appropriate next steps,” Adnan Masood, the chief architect at UST, a tech consultancy firm, told TechCrunch in an email. “Large language models excel at generating human-like responses, yet rely on user prompts and interactions to deliver desired outcomes. In contrast, Auto-GPT leverages the advanced capabilities of OpenAI’s API to operate independently without user intervention.”

In recent weeks, new apps have emerged to make Auto-GPT even easier to use, like AgentGPT and GodMode, which provide a simple browser interface where users can type in what they want to accomplish. Note that both require an API key from OpenAI to unlock their full capabilities.

Like any powerful tool, however, Auto-GPT has its limitations — and risks.

Depending on what objective the tool’s provided, Auto-GPT can behave in very… unexpected ways. One Reddit user claims that, given a budget of $100 to spend within a server instance, Auto-GPT made a wiki page on cats, exploited a flaw in the instance to gain admin-level access and took over the Python environment in which it was running — and then “killed” itself.

There’s also ChaosGPT, a modified version of Auto-GPT tasked with goals like “destroy humanity” and “establish global dominance.” Unsurprisingly, ChaosGPT hasn’t come close to bringing about the robot apocalypse — but it has tweeted rather unflatteringly about humankind.

Arguably more dangerous than Auto-GPT attempting to “destroy humanity,” though, are the unanticipated problems that can crop up in otherwise perfectly normal scenarios. Because it’s built on OpenAI’s language models, which, like all language models, are prone to inaccuracies, Auto-GPT can make errors.

That’s not the only problem. After successfully completing a task, Auto-GPT usually doesn’t recall how to perform it for later use, and even when it does, it often won’t remember that it can reuse that approach. Auto-GPT also struggles to effectively break complex tasks into simpler sub-tasks and has trouble understanding how different goals overlap.

“Auto-GPT illustrates the power and unknown risks of generative AI,” Clara Shih, the CEO of Salesforce’s Service Cloud and an Auto-GPT enthusiast, said via email. “For enterprises, it is especially important to include a human in the loop approach when developing and using generative AI technologies like Auto-GPT.”
