Episodes
-
Happy Pride! In this episode we talk to Jenni Olson, Senior Director of the Social Media Safety Program at GLAAD, about their work in promoting LGBTQ safety, privacy, and expression online.
We talk about the recently released Social Media Safety Index, which evaluates social media platforms on their LGBTQ-inclusive policies. Some suggestions from their report include training moderators to understand and differentiate legitimate LGBTQ speech and content from harmful content, and creating policies against conversion therapy content and targeted misgendering and deadnaming.
Finally, we talk about the challenges of balancing free speech with protecting marginalized communities and offer suggestions for individuals working at social media platforms to advocate for change.
Further reading
GLAAD’s reports, including the Accelerating Acceptance report we discuss
Trust in Tech - let’s talk about protecting the LGBTQ+ Community Online (last year’s Pride Podcast)
Alice’s Practical Guide to Protecting LGBTQ+ Users Online
-
This episode is a listener Q&A featuring Cathryn Weems, seasoned Trust & Safety leader. We opened up our inbox to any possible question that anyone had aaaaand it was all about careers and job searching. It’s rough out there, and folks need help! We discuss networking as an introvert, how to market yourself on a resume and in an interview, how to job search, and more.
Mentioned in this episode:
Trust & Safety Tycoon
LinkedIn people to follow: Alice Hunsberger, Jeff Dunn, Leslie Taylor
Integrity Institute, All Tech is Human, TSPA
Previous job search episode
-
-
Episode Description:
This episode features a discussion between Alice Hunsberger (VP of Trust & Safety and Content Moderation at PartnerHero) and Matt Soeth (Head of Trust & Safety at All Tech is Human and Senior Advisor at Tremau) on the importance of trust and safety policies on online platforms.
Alice shares her experiences working on policy enforcement, emphasizes the need for clear communication with users, and describes the impact of policy on communities. She also offers insights on engaging with external stakeholders, updating policies for new technologies, and the role of policy in platform differentiation.
-
Many of us working at tech companies are having to make moral and ethical decisions when it comes to where we work, what we work on, and what we speak up about. In this episode, we have a conversation with Nadah Feteih around how tech workers (specifically folks working in integrity and trust & safety teams) can speak up about ethical issues at their workplace. We discuss activism from within the industry, compelled identity labor, balancing speaking up and staying silent, thinking ethically in tech, and the limitations and harms of technology.
Takeaways
Balancing speaking up and staying silent can be difficult for tech workers, as some topics may be divisive or risky to address.
Compelled identity labor is a challenge faced by underrepresented and marginalized tech workers, who may feel pressure to speak on behalf of their communities.
Thinking ethically in tech is crucial, and there is a growing need for resources and education on tech ethics.
Tech employees have the power to take a stand and advocate for change within their companies.
Engaging on social issues in the workplace requires a balance between different approaches, including staying within the system and speaking up from the outside.
Listening to moderators and incorporating local perspectives is crucial for creating inclusive and equitable tech platforms.
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
Mentioned in this episode:
Breaking the Silence: Marginalized Tech Workers’ Experiences and Community Solidarity
Black in Moderation
Tech Worker Handbook
No Tech For Apartheid
Tech Workers Coalition
Credits
Today’s episode was produced, edited, and hosted by Alice Hunsberger.
You can reach me and Talha Baig, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
You asked, we answered! It’s a rough time out there in the tech industry as so many people in Trust & Safety are job searching or thinking about their career and what it all means.
In this episode, Alice Hunsberger shares her recent job search experience and gives advice on job searching and career development in the Trust and Safety industry. Listener questions that are answered include:
How do I figure out what to do next in my career?
What helps a resume or cover letter stand out?
What are good interviewing tips?
What advice do leaders wish they had when they were first starting out?
Do T&S leaders really believe we will have an internet free of harm (or at least with drastically reduced harm)?
Resources and links mentioned in this episode:
Personal Safety for Integrity workers
Hiring and growing trust & safety teams at small companies
Katie Harbath’s career advice posts
Alice Links
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
Today’s episode was produced, edited, and hosted by Alice Hunsberger.
You can reach me and Talha Baig, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
Integrity workers lack a shared resource they can point to for a taxonomy of harms and the specific interventions that mitigate them. Enter Grady Ward, a visiting fellow at the Integrity Institute, who discusses how he is creating a Wikipedia for and by integrity workers.
In typical Trust in Tech fashion, we also discuss the tensions and synergies between integrity and privacy, and if you stick around to the end, you can hear about some musings on the interplay of art and nature.
Links:
The Wikipedia of Trust and Safety
Grady’s personal website
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
Today’s episode was produced, edited, and hosted by Talha Baig.
You can reach me and Alice Hunsberger, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
With the Senate Child Safety Hearing on the horizon, we sit down with Vaishnavi, former Head of Youth Policy at Meta, to chat about the specific problems and current policy landscape regarding child safety.
Vaishnavi now works as an advisor to tech companies and policymakers on youth policy issues!
Some of the questions answered on today’s show include:
- What are the different buckets of problems for Child Safety?
- How can we think about age-appropriate design?
- What are common misconceptions in the tension between privacy and child safety?
- What is the current regulation for child safety?
- What does she expect to hear at the Senate hearing?
Disclaimer: The views stated in this episode are not affiliated with any organization and only represent the views of the individuals.
-
Listen to this episode to learn how to stay safe as an Integrity worker.
Links:
- Tall Poppy (available through employers only at the moment)
- DeleteMe
- PEN America Online Harassment Field Manual
- Assessing Online Threats
- Want a security starter pack? | Surveillance Self-Defense
- Yoel Roth on being targeted: Trump Attacked Me. Then Musk Did. It Wasn't an Accident.
- Crash override network: What To Do If Your Employee Is Being Targeted By Online Abuse
Practical tips:
If you’re a manager
- Train your team on what credible threats look like
- Make sure you have a plan in place for dealing with threats to your office or employees
- Allow pseudonyms; don’t require public photos
- Invest in services that can help your employees scrub public data
If you’re an individual
- Keep your personal social media private/friends-only
- Use different photos on LinkedIn than on your personal social media
- Consider hiding your location online, not using your full name, etc.
Credits:
Today’s episode was produced, edited, and hosted by Alice Hunsberger.
You can reach me and Talha Baig, the other half of the Trust in Tech team, at [email protected].
Our music is by Zhao Shen. Special thanks to all the staff at the Integrity Institute.
-
Caroline Sinders is a machine-learning design researcher, online harassment expert, and artist. We chat about common dark patterns in tech, how to prevent them at your company, a novel way to think about your career, and how photography relates to generative AI.
Sinders has worked with Facebook, Amnesty International, Intel, IBM Watson, and the Wikimedia Foundation, among others.
We answer the following questions on today’s show:
1. What are dark patterns, and how can we prevent them?
2. How do you navigate multi-stakeholder groups to avoid baking in dark patterns?
3. What is a public person?
4. What is a framework for approaching data visualization?
5. How is photography an analogue to generative AI?
This episode goes in lots of directions to cover Caroline’s varied interests - hope you enjoy it!
-
An Introduction to Generative AI
In this episode, Alice Hunsberger talks with Numa Dhamani and Maggie Engler, who recently co-authored a book about the power and limitations of AI tools and their impact on society, the economy, and the law. In this conversation, they dive deep into some of the topics in the book and discuss what writing a book was like, as well as the process of getting to publication.
You can preorder the book here, and follow Maggie and Numa on LinkedIn.
-
It seems every day we are pulled in different directions on social media, yet what we are feeling seldom resonates. Enter David Jay, a master at building movements, including leading movement-building for the Center for Humane Technology. In this episode, we learn precisely how to build a movement, and why communities are perpetually underfunded.
David Jay is an advisor to the Integrity Institute and played a pivotal role in its early days. He is also the founder of Relationality Labs, which aims to make the impact of relational organizing visible so that organizers can be resourced for the strategic value they create. He has a diverse range of past experience, including founding asexuality.org and serving as Chief Mobilization Officer for the Center for Humane Technology.
Here are some of the questions we answer on today’s show:
1. How do you create, scale, and align relationships to build a movement?
2. How do you structure stories so they resonate?
3. How do you keep an ear to the ground for new movements?
4. How do you identify future leaders?
5. Why is David Jay excited by the Integrity Institute and the future of integrity workers?
6. Why don’t community-based initiatives get funded at the same rate as non-community-based initiatives?
Check out David Jay’s Relationality Lab!
Disclaimer: The views in this episode only represent the views of the people involved in the recording of the episode. They do not represent any other entity’s views.
-
Elections matter, and history has demonstrated online platforms will find themselves grappling with these challenges whether they want to be or not. The two key questions facing online platforms now, as they stare down the tsunami of global elections heading their way, are: Have they initiated an internal elections integrity program? And if so, how do they ensure the best possible preparation to safeguard democracies globally?
The Integrity Institute launched an elections integrity best practices guide on “Defining and Achieving Success in Elections Integrity.” This latest guide extends the first and provides companies – large or small, established or new-on-the-block – concrete details as they fully implement an elections integrity program.
Today on the podcast, we talk to four contributors about this guide: Glenn Ellingson, Diane Chang, Swapneel Mehta, and Eric Davis.
Also check out our first episode on elections!
-
Alice Hunsberger talks to Heather Grunkemeier again, this time covering Heather’s solution for dealing with creeps at Rover from a policy and operational lens, measuring trust, and what it’s been like for her to strike out on her own as a consultant.
Also check out our first episode with Heather, How to Find Your Place in Trust & Safety: A Story of Career Pivoting.
-
Alice Hunsberger talks to Heather Grunkemeier (former Program Owner of Trust & Safety at Rover, and current owner of consultancy firm Twinkle LLC) and discusses how Heather finally broke into the field of Trust & Safety after years of trying, what it was actually like for her, and what her advice is for other people in the midst of career pivots. We also touch on mental health, identity, self worth, and how working in Trust & Safety has unique challenges (and rewards).
If you liked our Burnout Episode, you may enjoy this one too. (And if you haven’t listened to it yet or read our Burnout resource guide, please check it out).
Credits
This episode of Trust in Tech was hosted, edited, and produced by Alice Hunsberger.
Music by Zhao Shen.
Special thanks to the staff and members of the Integrity Institute for their continued support.
-
On today's episode, our host Talha Baig is joined by guest James Alexander to discuss all things AI liability. The episode begins with a discussion on liability legislation, as well as some of the unique situations that copyright law has created. Later in the episode, the conversation shifts to James's experience as the first member of Wikipedia's Trust and Safety team.
Here are some of the questions we answer in today’s episode.
Who is liable for AI-generated content?
How does Section 230 affect AI?
Why does AI have no copyright?
How will negotiations play out between platforms and the companies building AI models?
Why do the Spider-Man multiverse movies exist?
What did it look like to be the first trust and safety worker at Wikipedia?
What does fact-checking look like at Wikipedia?
-
On today's episode, our host Talha Baig is joined by guest David Harris, who has been writing about Llama since the initial leak. The two of them begin by discussing all things Llama, from the leak to the open-sourcing of Llama 2. Later in the episode, they dive deeper into policy ideas seeking to improve AI safety and ethics.
Show Links:
David’s Guardian Article
CNN Article Quoting David
Llama 2 release Article
-
What can companies do to support the LGBTQ+ community during this pride season, beyond slapping a rainbow logo on everything? Integrity Institute members Alex Leavitt and Alice Hunsberger discuss the state of LGBTQ+ safety online and off, how the queer community is unique and faces disproportionate risks, and what concrete actions platforms should be taking.
Show Links:
Human Rights Campaign declares LGBTQ state of emergency in the US
Social Media Safety Index
Digital Civility Index & Our Challenge | Microsoft Online Safety
Best Practices for Gender-Inclusive Content Moderation — Grindr Blog
Tinder - travel alert
Assessing and Mitigating Risk for the Global Grindr Community
Strengthening our policies to promote safety, security, and well-being on TikTok
Meta's LGBTQ+ Safety center
Data collection for queer minorities
-
The acquisition of Twitter broke, well, Twitter. Around 90% of the workforce left the company, leaving shells of former teams to handle the same responsibilities.
Today, we welcome two guests from Twitter’s civic integrity team. New guest Rebecca Thein was a Senior Engineering Technical Program Manager for Twitter’s Information Integrity team. She is also a Digital Sherlock for the Atlantic Council’s Digital Forensic Research Lab (DFRLab).
Theodora Skeadas is a returning guest from our previous episode! She managed public policy at Twitter and was recently elected as an Elected Director of the Harvard Alumni Association.
We answer the following questions on today’s episode:
How much was the civic integrity team hurt by the acquisition?
What are candidate labels?
How did Twitter prioritize its elections?
What did Twitter’s org structure look like pre- and post-acquisition?
And finally, what is this famous Halloween party that all the ex-Twitter folks are talking about?
-
This episode is a bit different – instead of getting deep into the weeds with a guest, we’re starting from the beginning. Our guest today, Pearlé Nwaezeigwe, aka the Yoncé of Tech Policy, chats with me about Tech Policy 101.
I get a lot of questions from people who are fascinated by Trust & Safety and Integrity work in tech, and they want to know – what does it look like? How can I do it too? What kinds of jobs are out there? So, I thought we’d tackle some of those questions here on the podcast.
Today’s episode covers the exciting topics of nipples, Lizzo, weed, and much more. And as any of us who have worked in policy would tell you, “it’s complicated.”
Let me know what you think (if you want to see more of these, or less) – this is an experiment. (You can reach me here on LinkedIn). — Alice Hunsberger
Links:
Pearlé’s newsletter
Lizzo talks about censorship and body shaming
Oversight board on nipples and nudity
Grindr’s Best Practices for Gender-Inclusive Content Moderation
TSPA curriculum: creating and enforcing policy
All Tech is Human - Tech Policy Hub
Credits:
Hosted and edited by Alice Hunsberger
Produced by Talha Baig
Music by Zhao Shen
Special thanks to Rachel, Sean, Cass, and Sahar for their continued support