
Biden uses an executive order to open federal sites for AI

Inside of a data center. Panumas Nikhomkhai / Pexels

President Biden signed an executive order Tuesday designed to ensure that the AI industry will have plenty of compute and electrical power in the coming years by making federal lands available to expansive data centers and clean energy production facilities.

Specifically, the order directs federal agencies to fast-track large-scale AI infrastructure projects on federal land, make more federal sites available for data center and energy production projects, and integrate the new infrastructure into the local power grid. The Department of Energy and the Department of Defense are each to identify three sites within their holdings where private companies could build AI data centers, then run “competitive solicitations” from prospective builders for those sites.


The order isn’t just a blank check for new AI projects; it imposes numerous safeguards and criteria on developers governing how these projects can be built. These include requiring firms to pay for their facilities’ construction and to provide enough “clean energy” capacity to fully power the data centers once they come online.

The race to achieve artificial general intelligence, with its relentless drive to train ever-larger language models in the hope that the U.S. can beat out China for global leadership in the technology’s development, has caused the electricity and cooling requirements of AI data centers to skyrocket in recent years. A Department of Energy report from December estimated that data center electricity demand has tripled in the last 10 years and is on pace to as much as triple again by 2028.

A new report from JLL doesn’t paint a much rosier picture: the real estate management firm figures that data center power demand will only double by 2029. What’s more, current data center infrastructure tends to be geographically clustered, which strains local power grids, “distorting” how that power is delivered to customers and increasing the likelihood of brownouts.

AI will have “profound implications for national security and enormous potential to improve Americans’ lives if harnessed responsibly, from helping cure disease to keeping communities safe by mitigating the effects of climate change,” President Biden said in a prepared statement. “However, we cannot take our lead for granted. We will not let America be out-built when it comes to the technology that will define the future, nor should we sacrifice critical environmental standards and our shared efforts to protect clean air and clean water.”

Andrew Tarantola
Former Computing Writer
Google puts military use of AI back on the table

On February 4, Google updated its “AI principles,” a document detailing how the company would and wouldn’t use artificial intelligence in its products and services. The old version was split into two sections: “Objectives for AI applications” and “AI applications we will not pursue,” and it explicitly promised not to develop AI weapons or surveillance tools.

The update was first noticed by The Washington Post, and the most glaring difference is the complete disappearance of any “AI applications we will not pursue” section. In fact, the language of the document now focuses solely on “what Google will do,” with no promises at all about “what Google won’t do.”

Read more
A new government minister for AI has yet to use ChatGPT

Ireland’s newly appointed minister for AI oversight has admitted that she’s never used ChatGPT and hasn’t yet downloaded the hot new chatbot DeepSeek to her phone, the Irish Independent reported on Tuesday.

Read more
European Union issues guidance on how to not violate the AI Act’s ‘prohibited use’ section

Companies worldwide are now officially required to comply with the European Union's expansive AI Act, which seeks to mitigate many of the potential harms posed by the new technology. The EU Commission on Tuesday issued additional guidance on how firms can ensure their generative models measure up to the Union's requirements and steer clear of the Act's "unacceptable risk" category of AI use cases, which are now banned within the bloc.

The AI Act was voted into law in March 2024; however, the first compliance deadline came and went just a few days ago, on February 2, 2025.

Read more