What Exactly Is Facebook Doing With AI?

If the idea of Facebook having high-powered, self-taught computers watching your every move on the site freaks you out a little bit, you may not want to look too far into their dedicated AI research lab. Facebook’s photo tagging, friend recommendations, fake-news filters, timeline sorting, and many other features all depend on some version of AI. It makes sense, given that its 2.19 billion monthly active users would be impossible for any human team to handle on their own, but the sheer scale and rate at which Facebook is building AI into its products makes it worth a look.

Image recognition


Face identification and auto-tagging are only scratching the surface of Facebook’s machine learning capabilities. They’ve actually been using a dataset of 3.5 billion tagged Instagram photos to train their software to identify what’s in a photo, whether it’s a beach (#beachlife!) or a cat (#lolcats).

This isn’t just for kicks – not only can it tag and categorize your photos, it can also provide keywords to help describe images to the visually impaired and check for inappropriate/offensive content (even if it does get a bit overzealous sometimes). They’re even building a tool to help figure out human poses, which could become a powerful way to make guesses about user mood and behavior – and that could get a bit creepy. But we’ve gotten used to a lot.
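To get a feel for how training on hashtagged photos can work, here’s a minimal sketch in PyTorch. The hashtags, the tiny model, and the random “photos” are all made up for illustration – the real systems are vastly larger – but the core idea is the same: treat each photo’s hashtags as a multi-hot label vector and train a multi-label classifier against it.

```python
import torch
import torch.nn as nn

# Hypothetical tag vocabulary; in practice this would be thousands of hashtags.
HASHTAGS = ["#beachlife", "#lolcats", "#sunset", "#food", "#selfie"]

# A tiny stand-in for an image classifier (real systems use deep convnets).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, len(HASHTAGS)),
)
loss_fn = nn.BCEWithLogitsLoss()  # standard loss for multi-label problems
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Fake batch: 4 random "photos" plus each photo's hashtags as a multi-hot target
images = torch.randn(4, 3, 64, 64)
targets = torch.tensor([[1, 0, 1, 0, 0],
                        [0, 1, 0, 0, 0],
                        [0, 0, 0, 1, 1],
                        [1, 0, 0, 0, 0]], dtype=torch.float32)

# One training step
logits = model(images)
loss = loss_fn(logits, targets)
loss.backward()
optimizer.step()

# At inference time, sigmoid scores above a threshold become predicted tags
probs = torch.sigmoid(model(images[:1]))
predicted = [tag for tag, p in zip(HASHTAGS, probs[0]) if p > 0.5]
```

The appeal of this approach is that the labels come for free: users already attached the hashtags, so no human annotators are needed, even though the labels are noisy.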

Recommendations and sorting


Facebook recommends friends, sure, but the suggestions don’t stop there. It also recommends timeline posts, news, events, groups, pages, products, and more. Most of the content you see on your page is showing up because a machine-learning algorithm decided you would like it and prioritized it for you. This can get fairly political, though, as has been evidenced by a few upsets over fake news, filter bubbles, and general bias centered around major elections.

Content moderation


Though these systems are still very much works in progress, recent events have prompted Facebook to push harder for strong content-filtering systems that can identify fake news and hate speech. They can keep an eye out for links or text that might be propagating false or radical information and remove it. Apparently, these algorithms have been most successful at finding and deleting terrorist propaganda and recruitment content, catching over 99% of it.

Natural language processing


Modern AI is getting pretty good at figuring out what humans are saying. The next step is to figure out how they’re saying it. Having acquired Wit.ai, a natural-language processing startup, Facebook is looking to upgrade their ability to discern context and meaning more accurately to help them fight things like fake news and hate speech. They’re also working on interacting with users in different languages and improving translations.

Among other applications, Facebook is actually using AI to determine when someone posts suicidal thoughts on Facebook, contacting their friends and first responders when necessary. By their report, this has already begun to save lives, and it shows how powerful AI can be in a setting where it has access to human psychological data.

Playing games


Games are a great way to test AIs. Drop them into an artificial situation and see how they do versus other computers or humans and how well they can actually learn. Facebook has ELF OpenGo, which is similar to DeepMind’s AlphaGo Zero, as well as the broader ELF project, which provides a platform for AI game research. They’ve even developed an AI platform to help conduct research on AIs playing StarCraft.

Research and development


Facebook’s primary AI site doesn’t present you with a bunch of flashy marketing materials about the future. There’s a lot of pretty serious stuff going on, though, as you might infer from the number of projects and teams. They’ve developed tools like PyTorch and (with Microsoft) ONNX, which are open-source contributions to AI research in general. They also join most of the other major AI companies in the Partnership on AI, with the goal of using AI to benefit society and developing it responsibly.
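To show what those open-source contributions look like in practice, here’s a minimal sketch of the PyTorch-to-ONNX workflow. The toy model is made up for illustration, but `torch.onnx.export` is the real bridge between the two projects: it converts a PyTorch model into the ONNX interchange format so other runtimes and frameworks can load it.

```python
import torch
import torch.nn as nn

# A tiny stand-in model; any torch.nn.Module is exported the same way.
model = nn.Sequential(nn.Linear(10, 4), nn.ReLU(), nn.Linear(4, 2))
model.eval()

# ONNX export traces the model with an example input of the right shape
dummy_input = torch.randn(1, 10)
with torch.no_grad():
    out = model(dummy_input)

try:
    # Writes an ONNX graph that e.g. ONNX Runtime can load
    torch.onnx.export(model, dummy_input, "tiny_model.onnx")
    print("exported tiny_model.onnx")
except Exception as exc:  # optional ONNX dependencies may be missing in minimal installs
    print("export skipped:", exc)
```

The point of ONNX is exactly this kind of interoperability: train in one framework, then deploy the same model in another.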

So what is Facebook going to do with all this power?

Your Facebook experience has undoubtedly been improved by AI, and chances are that it will come to improve other parts of your life as well, given that a lot of Facebook’s research is open and can be used by other researchers and developers. But the company does have a tendency to push too far too fast, and AI is another avenue where that could go wrong.

If it feels a bit like a sci-fi dystopia to have robots checking up on your behavior, monitoring your psychology, and moderating your interactions, you’re not wrong. Targeted ads already use guesses about you to sell you things, but what if Facebook begins using machine learning to figure out how to manipulate your mood before showing you an ad? Perhaps a series of posts, color schemes, or subtle nudges could stimulate hunger right before it suggests a pizza delivery, or it could wait until your mood is empathetic to promote a charity. It may be a more real and imminent concern than you think.

So Facebook is going to become Skynet?

Well, there’s already a company named Skynet, so Facebook would have to acquire them first. But then maybe they will take over the human race, not with killer robots, but with gentle nudges. More likely, though, we’ll get some amazingly beneficial things from Facebook’s AI (I still think social media, on balance, has done more good than harm), as well as some things that make us even madder than Cambridge Analytica. Even AI can’t predict the future (yet), so we’ll just have to see which timeline we end up in.

Image credit: Game of Go in our club.

Andrew Braun

Andrew Braun is a lifelong tech enthusiast with a wide range of interests, including travel, economics, math, data analysis, fitness, and more. He is an advocate of cryptocurrencies and other decentralized technologies, and hopes to see new generations of innovation continue to outdo each other.


  1. There is one major flaw with the presumption that AI, Facebook’s or anybody else’s, will get rid of fake news, terrorist recruiting, hate speech, bullying, etc. AI, in spite of having “intelligence” in its name, is actually very dumb, just like any other computer program. AI has to be taught just as children have to be taught. Children are very impressionable, and during the learning process they pick up the attitudes, opinions, biases and prejudices of any adults they come in contact with. In a similar fashion, the programmers that are teaching the AI programs will impart their biases, opinions and prejudices to those programs. To think of AI as some impartial deus ex machina that will save humanity from itself is ludicrous. AI will learn its morality/ethics from the people that create it.

    “But then maybe they will take over the human race, not with killer robots, but with gentle nudges.”
    It ain’t gonna be “gentle nudges” with Zuckerberg directing the takeover. And WHO says the human race wants or needs to be taken over?!

    “So Facebook is going to become Skynet?”
    Maybe not in name but in deeds. It is becoming Skynet with a heavy dose of Omni Consumer Products mixed in.

    1. AI/big data/machine learning will potentially be able to do a lot of fantastic things, but as with any new technology, the dangers are real! Specifically as to algorithms being philosophically influenced by their designers, that’s a real problem already! In the absence of an optimistic vision to counter it, it paints a rather dark picture, but you’d probably enjoy the book “Weapons of Math Destruction” by Cathy O’Neil.


  2. I, personally, got off Facebook long ago for a number of reasons. The relentless directed advertising, the angry people who use this platform to spread their peculiar and bizarre conceptions about a variety of subjects, and the people who spend their lives feeding this time-consuming activity make me not want to be a part of this growing nightmare.

    People should realize that there are far better pursuits than reading what some other person “thinks” about a particular subject. In reality, no one else really cares what person X “thinks”. Many of the comments are insulting, crude, and ridiculous, and they even cause those with undeveloped minds, like school children, to harm themselves because some other child made a negative shaming comment about them.

    Originally, Facebook was an interesting idea, share some photos, say Hi to friends, etc., but uneducated feeble-minded (and often nasty) people have turned this (like so many other things) from something innocuous into a useless trash heap of words and pictures, preying on the mindless lives of millions.

    1. Unfortunately, there are ups and downs to every new technology! Facebook can be invaluable if you move around a lot and need an easy way to keep up with friends and family, and it’s made connecting with people and places a lot easier, but if used as a primary news source or a place to let off steam, it devolves pretty quickly.

      I don’t think it’s necessarily a failure at all, though! Often the negative aspects of a new technology get played up, even though the kinks generally get ironed out in the long run:

      Printed books: http://williamwolff.org/wp-content/uploads/2009/06/TrithemiusScribes.pdf
      The telephone: http://wondermark.com/true-stuff-telephone-menace
      Car radios: https://mentalfloss.com/article/29631/when-car-radio-was-introduced-people-freaked-out

      1. The general concept of FB is great. However, it has been subverted by Mark Zuckerberg to be a data harvesting tool. Whether it was designed as such from the beginning or whether Zuckerberg added the harvesting functionality over time is irrelevant at this point. Connecting people has become secondary at FB.

        Internet and social media have given anybody with a computing device a voice. Now we are reaping the whirlwind of millions of ideologues, demagogues and people with very little to say, all demanding to be heard.
