China has reportedly used Meta's open-source AI model to develop a defense chatbot called “ChatBIT,” optimized for military applications and designed to assist in gathering and processing intelligence for operational decision-making. The research, outlined in academic papers reviewed by Reuters, involves scientists from prominent Chinese institutions, including two affiliated with the PLA’s Academy of Military Science (AMS).
Meta, in response, stated that the PLA’s use of its model was unauthorized. The company’s policies prohibit the use of its models for military, nuclear, and other sensitive applications. Meta acknowledged, however, that such restrictions are difficult to enforce once a model has been released as open source.
Researchers fine-tuned Meta’s Llama 2 13B model, optimizing it for military-specific dialogue and question-answering tasks. The resulting model reportedly achieved around 90% of the capability of OpenAI’s GPT-4, though exact performance metrics remain unspecified. While the extent of ChatBIT’s deployment is unknown, the tool represents a strategic advancement, signaling the PLA’s interest in adapting open-source AI for defense.
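For context, the sketch below shows how an open-weight Llama 2 13B checkpoint is commonly adapted for domain-specific dialogue using parameter-efficient (LoRA) fine-tuning with the Hugging Face ecosystem. It is a generic, hypothetical illustration only: the model ID, the `dialogue_corpus.jsonl` dataset, and all hyperparameters are assumptions for demonstration, and none of it reflects the actual ChatBIT training pipeline, which has not been published.

```python
# Illustrative sketch: LoRA fine-tuning of an open-weight Llama 2 13B checkpoint
# for dialogue / question answering. Hypothetical example only; not the ChatBIT method.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Llama-2-13b-hf"  # gated checkpoint; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# LoRA: train a small set of adapter weights instead of all 13B parameters.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# Hypothetical JSONL corpus with a "text" field holding formatted Q&A dialogues.
dataset = load_dataset("json", data_files="dialogue_corpus.jsonl", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-13b-dialogue-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
```

The point of the sketch is how low the barrier is: because the weights are openly distributed, adapting a 13B-parameter model to a new domain requires only a modest corpus and commodity tooling, which is precisely the dual-use concern the reporting highlights.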
Global Security Implications of Open AI Models
Reports indicate that China used Meta’s AI model to develop a defense chatbot despite Meta’s terms prohibiting military use of its open-source models. China’s adaptation of Meta’s Llama model for military purposes is a real-world example of how open-source AI policies can be circumvented. The development has sparked concern among U.S. technology and security officials. In October 2023, U.S. President Joe Biden signed an executive order aimed at managing AI risks, emphasizing potential security vulnerabilities in open-source AI. The Pentagon, which is closely monitoring these developments, acknowledges both the benefits and risks of open AI models. Georgetown University analyst William Hannas has highlighted the difficulty of limiting Chinese access to Western AI advances, given the extensive collaboration between top researchers from both countries.
China’s research into AI applications for military and domestic security is rapidly advancing. A recent paper also revealed that Llama models are being used for “intelligence policing” to enhance data processing and support law enforcement decision-making. China’s Ministry of Defense and PLA-affiliated institutions, however, have not responded to inquiries regarding the exact capabilities or deployment status of ChatBIT.
The recent adaptation of Meta’s Llama model by Chinese researchers linked to the People’s Liberation Army (PLA) raises important questions about the risks and benefits of open-source AI. This incident highlights both the value of open AI in promoting innovation and the potential threats when such technology is repurposed for military objectives. Below is a closer look at these implications.
Need for Regulatory Balance
In the case of China’s PLA, the adaptation of Llama 2 to create the military-focused model “ChatBIT” shows how easily open-source AI can be repurposed for defense and intelligence gathering. The use of AI by foreign military organizations raises broader security and regulatory concerns, especially for nations like the United States. As AI becomes more integral to military operations, the potential for open-source models to be used in ways that threaten international security grows, and the ChatBIT case has pushed those concerns onto the international stage.
The U.S. government has responded to this by implementing restrictions on investments in Chinese technology sectors that could compromise national security. President Joe Biden’s recent executive order to manage AI risks exemplifies the need for regulatory measures aimed at minimizing misuse of AI technology. As William Hannas of Georgetown University pointed out, collaboration between Chinese and U.S. researchers makes it difficult to prevent Chinese institutions from accessing and advancing AI technology, despite security concerns. With China’s goal of leading in AI by 2030 and its heavy investment in research, Western attempts to restrict Chinese access to open AI models may not be fully effective.