Facebook has announced that it is opening two new artificial intelligence research labs in Pittsburgh and Seattle. The labs will include professors hired from the University of Washington and Carnegie Mellon University. This has prompted some fears that Facebook is poaching the instructors needed to train the next generation of A.I. researchers.
“It is worrisome that they are eating the seed corn,” Dan Weld, a computer science professor at the University of Washington, told The New York Times. “If we lose all our faculty, it will be hard to keep preparing the next generation of researchers.”
Experts in the field of A.I. and machine learning can often command extremely high salaries, making it difficult for universities and other non-profit research centers to compete with the likes of Facebook and Google.
In a recent post, Facebook’s director of A.I. research, Yann LeCun, said that the company’s goals have been misinterpreted. Rather than poaching qualified experts from universities, he argued, dual appointments benefit both sides.
“Professors gain a different type of experience in industry that can have a positive impact on their students and on their research,” LeCun said. “Additionally, their connection with industry helps produce new scientific advances that may be difficult to achieve in an academic environment, and helps turn those advances into practical technology. Universities are familiar with the concept of faculty with part-time appointments in industry. It is common in medicine, law, and business.”
LeCun stressed that the company’s goal with its FAIR program is to create a healthy partnership between Facebook and the universities that contribute to its research labs.
“Unlike others, we work with universities to find suitable arrangements and do not hire away large numbers of faculty into full-time positions bottled up behind a wall of non-disclosure agreements,” he added. “We contribute to the local ecosystem.”
Facebook itself has plenty of reasons to invest in A.I. Many of the company’s latest initiatives, such as photo and video sorting, rely on machine learning. The social network is also experimenting with A.I. that can read text in order to help filter out hate speech and content from extremist organizations.