What the Facebook Whistleblower JUST Said
FULL TRANSCRIPT
Okay folks, here is a summary of part one of the Senate testimony of Frances Haugen, 37 years old, a former product manager at Facebook with a degree from Harvard. This is a summary of what she told the Senate hearing committee today.
Haugen says that Facebook repeatedly misled the public about what its own research reveals about the safety of children and about the efficacy of its artificial intelligence systems and their role in spreading divisive and extreme messages. She says Facebook has the potential to bring out the worst in us, but could also bring out the best in us if we had proper controls for it. She believes that Facebook harms children, stokes division, and harms democracy. She says Facebook puts profit before people, and that the company will not solve its problems without the help, or the compelling, of Congress. She says Facebook resolves conflicts within the company in favor of its own profits rather than what is best for society. Later she mentions that Facebook pays for its profit with our safety: the results are more profit for Facebook but more lies and more combativeness in our society, and sometimes this actually leads to violence.
She mentions that the company intentionally hides vital information that could lead regulators to pay closer attention to Facebook. She says Facebook purposefully hides behind walls that keep researchers out; this is different from Google, for example, where researchers can access data and do more research (a comparison she made). These problems, she says, are very solvable, and she gives some suggestions for how to solve them, which we'll talk about in just a moment.
She also mentions that most children who use Facebook probably don't have the self-awareness to recognize what Facebook is actually doing to them and how it is actually hurting them. Facebook says that children lie about their age, but the reality, she says, is that Facebook's internal statistics show most children tell the truth, and you can figure out most children's actual ages by doing some backward analysis.
And she mentions that the real problems at Facebook, beyond people potentially lying about their age (a question that came up), are really staffing related: there aren't enough humans involved in decision making or problem solving. When there is understaffing, groups within Facebook are discouraged from actually trying to solve issues, because that could lead to more work when they're already overwhelmed. For example, Haugen mentioned that she was on a counterespionage team, and at any given time they could only deal with about one third of the caseload they had. If they built a better detector and found more problems, they would just be even more overwhelmed and even further behind. So she cites a lack of humans, a lack of staff, as one of the root problems at Facebook.
She says her goal is to fix Facebook, though Facebook should declare moral bankruptcy. She mentions that some things that could help include the KIDS Act, an act that would limit features like autoplay, push alerts, like buttons, and even follower counts for children. There was a mention that Facebook's Q2 revenue per user is $35.58; annualized, that would be over $142, which is pretty incredible.
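To check that annualized figure for yourself: the $35.58 is the Q2 number cited in the hearing, and a simple times-four extrapolation (an approximation, since quarters vary) gives the annual estimate:

```python
# Rough annualization of Facebook's reported Q2 revenue per user.
# A simple x4 extrapolation; real quarterly revenue varies, so this
# is only a ballpark estimate, not an exact annual figure.
q2_revenue_per_user = 35.58  # dollars per user, as cited in the hearing

annualized = q2_revenue_per_user * 4
print(f"${annualized:.2f}")  # -> $142.32
```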
Senator Klobuchar says that algorithms are likely to promote outrageous content, to which Haugen says absolutely: Facebook uses engagement-based ranking algorithms that basically amplify extremes, and in reverse engineering, Facebook found in studies that it could actually lead people to act a certain way. For example, when the question was asked whether Facebook's algorithm can lead people to anorexia, she said the answer was yes, that research had shown Facebook has that power, which is kind of incredible to think about: Facebook could essentially manipulate you by catering certain messages to you.
Facebook invests more in the users that make it more money, she says, specifically in the English language. She talked a little more about the engagement-based ranking algorithm and counters what Facebook might say; she believes Facebook would argue, "you won't like Facebook as much if we didn't cater content for you." Her preference would be something like: hey, maybe we don't need algorithms to cater content for us; maybe we could go time-based, chronological, in the way we view content in a feed. But we've also seen how this plays out, by the way, with YouTube.
Remember back in the day (and I'm making this reference myself): YouTube was very subscription-based. If you subscribed to a channel, you would see a list of all the videos from your subscriptions. But YouTube found that this didn't actually increase viewership. Instead, showing you content that wasn't necessarily from your subscriptions list, but was deemed most likely to get the most view time from you, ended up growing the platform substantially more, and that's why they switched. I imagine Facebook has found very much the same thing to be true: catered content increases engagement, and that's why they use it. The problem is that the content Facebook is likely to promote is content that's likely to, quote, elicit an extreme reaction from the user, because in that case it's more likely to generate likes and shares, leading other people not only to view and consume that content and potentially share it, but also prompting other people to create similar content.
On YouTube, it's worth noting that when YouTube suggests a video to you, it's because each video has a pre-estimated watch time per impression. Every time they show you a thumbnail, they assume that on average you're going to spend, say, five, eight, or ten minutes watching. That watch time per impression is really important because it also factors in how likely you are to share the video. If you watch a video for eight minutes and don't share it, that's eight minutes. But if you watch a video and share it with a person, who then shares it with another person, who shares it with another, that could be 24 or 32 minutes of watch time for that one impression they gave you. So they're looking for that sort of outrageous or extreme content; YouTube does pretty much the same thing.
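The arithmetic above can be sketched in a few lines. This is purely illustrative of the "watch time per impression" idea as I described it; the function name and the numbers are my own hypotheticals, not YouTube's actual ranking formula:

```python
# Illustrative sketch of "watch time per impression": one impression is
# worth the direct watch time plus the watch time of downstream viewers
# reached through re-shares. Hypothetical model, not YouTube's real system.
def watch_time_per_impression(minutes_per_view: float,
                              extra_viewers_from_shares: int) -> float:
    """Expected total watch minutes generated by a single impression,
    counting the original viewer plus viewers reached via shares."""
    return minutes_per_view * (1 + extra_viewers_from_shares)

# An 8-minute video that is never shared yields 8 minutes per impression...
print(watch_time_per_impression(8, 0))  # -> 8
# ...but if the share chain reaches 2 or 3 more people, the same impression
# is worth 24 or 32 minutes, so highly shareable (often extreme) content wins.
print(watch_time_per_impression(8, 2))  # -> 24
print(watch_time_per_impression(8, 3))  # -> 32
```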
Facebook says its artificial intelligence can find bad content, but Haugen argues that you can't rely on artificial intelligence to solve these issues or to find bad content, because in her research, on the groups she worked with, they found that AI was only about 10 to 15 percent effective at actually finding content that violated the terms of service or was otherwise bad content on Facebook. So she believes it's very important not only for research to be shared with Congress, but for Congress to provide oversight; Facebook might even create suggestions for how that oversight should look, and Congress could collaborate in that sense.
She was also a big fan of what Twitter is doing with link sharing: when somebody posts a link and somebody else wants to retweet it, Twitter shows a little notification that pops up and says, in effect, "you haven't read this; do you want to read the article before you re-share it?" Facebook, she says, doesn't do this because it would reduce shares, and Facebook likes shares, so they don't want to stand in the way of people sharing content. She doesn't believe that having to click on an article before sharing it is a form of censorship; instead, she says, it's a good way to help prevent mis- or disinformation.
Now, let's see here. She also mentions (and I thought this was very interesting) that another senator brought up some examples of how Facebook's ad review had failed a few times. I actually have that up right here, and here's an example. This is by the Tech Transparency Project (techtransparencyproject.org), the TTP referenced by a senator during the hearing. They submitted ads targeted to 13-to-17-year-olds; they never ran the ads, they just wanted to see if they would get approved. Here's one: "It's time for some skittles, are you in?" with a picture of a bunch of drugs, "throw a skittles party like no other." And that was approved.
Here's one that shows a picture of a skinny girl and says, "when you're craving a snack, visit pro-anorexia sites to feed your motivation and reach your goal," except they shorten "anorexia" to just "ana" and call it an "ana tip." Somehow this got approved. And here are some other dating, gambling, alcohol, and smoking ads that got approved for targeting 13-to-17-year-olds. They show how quickly these were approved: a potential reach of 9.1 million 13-to-17-year-olds, and the ads were basically all approved within about an hour, an hour and one minute, of submission. And look at that: every single ad was approved at exactly the same time; you can see the same approval timestamp on all of them.
Haugen doesn't work on these particular ad teams, but she believes it's likely that algorithms are approving these ads and very unlikely that humans are looking at all of them. And if Facebook is relying on algorithms to review ads, then it's likely that only 10 to 15 percent of bad ads are actually caught, which is a problem, especially when she believes Facebook creates mental health issues not just for women but for children in general. We've seen this through the Wall Street Journal exposé, where there's ample evidence of mental health deterioration in children, especially girls but also boys.
One of the issues she cites is that parents don't really know how to help their children, because they didn't grow up with social media themselves. They might say something like, "hey, just stop using Facebook," but that would be bad advice, because people have a temptation to keep returning to social media due to the way our brains are wired, the dopamine release every time we check our phones. She says this is an issue because when you wake up in the morning you see people who are cruel to you, and when you go to sleep at night you see people who are cruel to you, through Facebook. This is very different from how bullying used to work: bullying in schools didn't used to follow you home, but now it follows you home wherever you are. You don't get a break from that sort of bullying, and that continues to perpetuate negative mental health outcomes for children.
She's a big fan of bringing oversight and regulation to Section 230 of the Communications Decency Act. She mentions that platforms like Facebook have 100 percent control over their algorithms, and that algorithms should not prioritize growth at the expense of public safety. She also mentions that 80 to 90 percent of sensitive content, in terms of swear words, somehow still makes it through Facebook's algorithms, and that those algorithms are especially bad at identifying inappropriate content in other languages, though they're also very bad at identifying inappropriate content in English.
Then, let's see, we talked about the ads already. She mentions that Facebook has a focus on scale: how can we do things cheaply and fast? It's very possible that a lot of submitted content on Facebook, in this case ads, is never reviewed by a human; she believes it's very likely that most ads are not reviewed by humans. We touched on that already. She believes we need, quote, "human-scale social media, not computer-scale social media." This goes back to not having enough staff.
And you know, my takeaway was really that, in her opinion, artificial intelligence is terrible at preventing the negative from exploding on Facebook, that algorithms have too many biases, and that left unchecked you end up with, as she mentioned in her opening remarks, more divisiveness, more potential violence, more bullying, and more mental health problems. So that gives you a summary of the first and longest part of the Senate testimony from Frances Haugen. I expect the second part to be relatively similar, but this gives you part one. Thank you so much for watching; if you found this helpful, consider subscribing, and folks, we'll see you in the next one.