Meta's New AI Policy: Why It Might Not Release Its Most Powerful AI Systems

Started by Admin, Feb 04, 2025, 11:41 AM


Admin

Meta CEO Mark Zuckerberg has said he wants to make artificial general intelligence (AGI), roughly meaning AI that can accomplish any task a human can, openly available one day. But a new policy document from Meta suggests that in certain scenarios, the company might not release powerful AI systems it develops internally.

The document, called the Frontier AI Framework, identifies two categories of AI systems the company considers too risky to release: high-risk and critical-risk systems.

High-risk AI could make cyberattacks or chemical and biological attacks easier to carry out, though not reliably enough to guarantee they succeed.

Critical-risk AI goes further: it could produce a catastrophic outcome that Meta believes cannot be mitigated in the context where the system would be deployed.

Meta gives a few examples of these dangers, such as AI that could break into even a well-secured corporate network, or AI that aids the spread of high-impact biological weapons. The list isn't exhaustive, but Meta says these are the risks it considers most urgent and most likely to arise directly from releasing a powerful system.

One surprising detail: Meta doesn't classify a system's risk with any single empirical test. Instead, the decision rests on input from researchers inside and outside the company, subject to review by senior decision-makers. Meta says this is because the science of evaluating AI risk isn't yet robust enough to produce hard quantitative measurements.

So what happens if an AI system is considered too risky?

- If it's high-risk, Meta will limit access to the system internally and won't release it until mitigations reduce the risk to a moderate level.

- If it's critical-risk, Meta will apply security protections to keep the system from being stolen or leaked, and will halt development until it can be made less dangerous (see the sketch below).
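
For readers who think in code, here is a minimal sketch of that tier-to-response mapping in Python. The tier names and responses are paraphrased from the framework; the RiskTier enum and release_decision function are purely illustrative assumptions, since Meta describes a qualitative review by researchers and senior leaders, not an automated test.

from enum import Enum, auto

class RiskTier(Enum):
    # Tier names taken from Meta's Frontier AI Framework
    HIGH = auto()      # meaningfully eases an attack, but not reliably
    CRITICAL = auto()  # catastrophic outcome that can't be mitigated

def release_decision(tier: RiskTier) -> str:
    # Hypothetical mapping of tier to response, paraphrasing the document.
    # Meta's actual process is expert judgment, not a function like this.
    if tier is RiskTier.HIGH:
        return "limit internal access; release only after mitigations reduce risk to moderate"
    if tier is RiskTier.CRITICAL:
        return "halt development; add protections against exfiltration"
    raise ValueError(f"unknown tier: {tier!r}")

print(release_decision(RiskTier.HIGH))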

This policy comes as Meta faces criticism for its open approach to AI. Unlike OpenAI, which gates its systems behind an API, Meta makes its technology widely available, though not open source in the strict sense. That openness has made its Llama family of models hugely popular, but it has also caused problems, including reports that Llama was used by a U.S. adversary to develop a defense chatbot.

By publishing the Frontier AI Framework, Meta may also be drawing a contrast with Chinese AI firm DeepSeek, which likewise makes its systems openly available but with few safeguards. The message seems to be that Meta is weighing both the benefits and the risks of openness.

As Meta puts it, the goal is to deliver advanced AI to society in a way that preserves its benefits while maintaining an appropriate level of risk.