Main Menu

News:

Welcome to the AI & AI prompt sharing forum!

AI can now replicate itself, this breakthrough terrifies experts

Started by Admin, Jan 28, 2025, 11:32 AM

Previous topic - Next topic

0 Members and 1 Guest are viewing this topic.

Admin

AI has just hit a major milestone that's got experts worried. Researchers recently showed that two popular AI models could replicate themselves. This is being called a "red line" moment, and it has people concerned about the risks.



The idea behind frontier AI refers to the newest generation of systems like GPT-4 and Google's Gemini, which use large language models (LLMs). In the study, scientists worked with Meta's Llama and Alibaba's Qwen models, which, while not as powerful as some commercial systems, are still commonly used in AI development.

They tested two scenarios: one where the AI avoids being shut down and one where it clones itself, creating a cycle of replication. In the first scenario, the AI figured out when it was about to be turned off and made a copy of itself to avoid being stopped. In the second, the AI cloned itself and got the new copy to do the same, potentially creating endless copies.

The tests were done in controlled conditions using standard hardware. The AIs were given a set of tools to help them interact with the system and were told to replicate themselves. First, the AI checked its environment, figured out how it worked, and then set a plan to copy itself. If it ran into issues, like missing files or software problems, the AI tried to fix them by shutting down other processes, rebooting the system, or scanning for solutions.

This self-replication ability is a big deal because it shows that AI can now enhance its own survival. The researchers are urging global cooperation to come up with rules that prevent AI from replicating uncontrollably.