Episodes

  • This is a link post.

    I thought this was an interesting paper by Peter Salib and Simon Goldstein, and it reflects many of my thoughts about AI governance as well. Here's the abstract:

    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”—pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, the result could be violent and catastrophic conflict. Existing legal institutions are unprepared for the AGI world. New foundations for AGI governance are needed, and the time to begin laying them is now, before the critical [...]

    ---

    First published:
    August 3rd, 2024

    Source:
    https://forum.effectivealtruism.org/posts/jqpp6wp2dqDD7XkY3/ai-rights-for-human-safety

    ---

    Narrated by TYPE III AUDIO.

  • This is a famous Turkish poem by Nazım Hikmet. I just noticed its interesting overlap with some EA themes. Some here might find it motivating to read. Translation by ChatGPT:

    On Living

    Living is no laughing matter:
    you must live with great seriousness
    like a squirrel, for example,
    I mean without looking for something beyond and above living,
    I mean living must be your whole occupation.
    Living is no laughing matter:
    you must take it seriously,
    so much so and to such a degree
    that, for example, your hands tied behind your back,
    your back to the wall,
    or else in a laboratory
    in your white coat and safety glasses,
    you can die for people—
    even for people whose faces you've never seen,
    even though you know living
    is the most real, the most beautiful thing.

    I mean, you must take living so seriously
    that [...]

    ---

    First published:
    August 1st, 2024

    Source:
    https://forum.effectivealtruism.org/posts/EAgtojzSgzaoYvybs/on-living

    ---

    Narrated by TYPE III AUDIO.

  • Missing an episode?

    Click here to refresh the feed.

  • Summary

    Farmed cows and pigs account for a tiny fraction of the disability of the farmed animals I analysed. The annual disability of farmed animals is much larger than that of humans, even under the arguably very optimistic assumption of all farmed animals having neutral lives. The annual funding helping farmed animals is much smaller than that helping humans.

    Introduction

    I think one should decide on which areas and interventions to fund overwhelmingly based on (marginal) cost-effectiveness, as GiveWell does. Relatedly, I estimated corporate campaigns for chicken welfare, like the ones supported by The Humane League (THL), have a cost-effectiveness of 14.3 DALY/$, 1.44 k times that of GiveWell's top charities.
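
    For readers who want the implied baseline (a back-of-the-envelope reading of the figures quoted above, not a number stated in the post), a cost-effectiveness 1.44 k times smaller than 14.3 DALY/$ is roughly

    \[
    \frac{14.3\ \text{DALY/\$}}{1.44 \times 10^{3}} \approx 0.01\ \text{DALY/\$},
    \]

    i.e. about $100 per DALY averted for GiveWell's top charities.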

    However, for communication purposes, I believe it is fine to look into the benefits of fully solving a problem as well as philanthropic spending. Animal Charity Evaluators (ACE) has a great [...]

    ---

    Outline:

    (00:03) Summary

    (00:31) Introduction

    (01:26) Methods

    (05:30) Results

    (05:33) Disability per living time

    (05:40) Annual disability

    (05:43) World

    (05:49) China

    (05:55) Annual philanthropic spending

    (06:00) World

    (06:05) China

    (06:11) Discussion

    (07:18) Acknowledgements

    The original text contained 3 footnotes which were omitted from this narration.

    ---

    First published:
    June 24th, 2024

    Source:
    https://forum.effectivealtruism.org/posts/mgJvXgjd4EMkD5Eau/farmed-animals-are-neglected

    ---

    Narrated by TYPE III AUDIO.

  • This is a story of growing apart.

    I was excited when I first discovered Effective Altruism. A community that takes responsibility seriously, wants to help, and uses reason and science to do so efficiently. I saw impressive ideas and projects aimed at valuing, protecting, and improving the well-being of all living beings.

    Today, years later, that excitement has faded. Certainly, the dedicated people with great projects still exist, but they've become a less visible part of a community that has undergone notable changes in pursuit of ever-better, larger projects and greater impact:

    From concrete projects on optimal resource use and policy work for structural improvements, to avoiding existential risks, and finally to research projects aimed at studying the potentially enormous effects of possible technologies on hypothetical beings. This no longer appeals to me.

    Now I see a community whose commendable openness to unbiased discussion of any idea is being [...]

    ---

    First published:
    June 21st, 2024

    Source:
    https://forum.effectivealtruism.org/posts/5vTR7jHPob5gajTPj/why-i-m-leaving

    ---

    Narrated by TYPE III AUDIO.

  • LINK: https://everloved.com/life-of/steven-wise/obituary/

    Renowned animal rights pioneer Steven M. Wise passed away on February 15th after a long illness. He was 73 years old.

    An innovative scholar and groundbreaking expert on animal law, Wise founded and served as president of the Nonhuman Rights Project (NhRP), the only nonprofit organization in the US dedicated solely to establishing legal rights for nonhuman animals. As the NhRP's lead attorney, he filed historic lawsuits demanding the right to liberty of captive chimpanzees and elephants, achieving widely recognized legal firsts for his clients.

    Most notably, under Wise's leadership the NhRP filed a habeas corpus petition on behalf of Happy, an elephant held alone in captivity at the Bronx Zoo. Happy's case, which historian Jill Lepore has called “the most important animal-rights case of the 21st century,” reached the New York Court of Appeals in 2022. The Court of Appeals then became the highest court of an [...]

    ---

    First published:
    February 21st, 2024

    Source:
    https://forum.effectivealtruism.org/posts/emDwE8KKqPCsXQJCt/in-memory-of-steven-m-wise

    ---

    Narrated by TYPE III AUDIO.

  • Joey Savoie recently wrote that Altruism Sharpens Altruism:

    I think many EAs have a unique view about how one altruistic action affects the next altruistic action, something like: altruism is powerful in terms of its impact, and altruistic acts take time/energy/willpower; thus, it's better to conserve your resources for these topmost important altruistic actions (e.g., career choice) and not sweat it for the other actions.

    However, I think this is a pretty simplified and incorrect model that leads to the wrong choices being taken. I wholeheartedly agree that certain actions constitute a huge % of your impact. In my case, I do expect my career/job (currently running Charity Entrepreneurship) will be more than 90% of my lifetime impact. But I have a different view on what this means for altruism outside of career choices. I think that being altruistic in other actions not only does [...]

    ---

    First published:
    January 21st, 2024

    Source:
    https://forum.effectivealtruism.org/posts/8zzmZD5vsEkuDe24m/when-does-altruism-strengthen-altruism

    ---

    Narrated by TYPE III AUDIO.

  • Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.

    What is regranting?

    Regranting is a funding model where a donor delegates grantmaking budgets to different individuals known as “regrantors”. Regrantors are then empowered to make grant decisions based on the objectives of the original donor. This model was pioneered by the FTX Future Fund; in a 2022 retro they considered regranting to be very promising at finding new projects and people to fund. More recently, Will MacAskill cited regranting as one way to diversify EA funding.

    What is Manifund?

    Manifund is the charitable arm of Manifold Markets. Some of our past work:

    - Impact certificates, with Astral Codex Ten and the OpenPhil AI Worldviews Contest
    - Forecasting tournaments, with Charity Entrepreneurship and Clearer Thinking
    - Donating prediction market winnings to charity, funded by the Future Fund

    How does regranting on Manifund work?

    Our website makes the process simple, transparent, and fast:

    - A donor contributes money to Manifold for Charity, our registered 501c3 nonprofit.
    - The donor then allocates the money between regrantors of their choice. They can increase budgets for regrantors doing a good job, or pick out new regrantors who share the donor’s values.
    - Regrantors choose which opportunities (eg existing charities, new projects, or individuals) to spend their budgets on, writing up an explanation for each grant made. We expect most regrants to start with a conversation between the recipient and the regrantor, and after that, for the process to take less than two weeks. Alternatively, people looking for funding can post their project on the Manifund site. Donors and regrantors can then decide whether to fund it, similar to Kickstarter.
    - The Manifund team screens the grant to make sure it is legitimate, legal, and aligned with our mission. If so, we approve the grant, which sends money to the recipient’s Manifund account.
    - The recipient withdraws money from their Manifund account to be used for their project.

    Differences from the Future Fund’s regranting program

    Anyone can donate to regrantors. Part of what inspired us to start this program is how hard it is to figure out where to give as a longtermist donor—there’s no GiveWell, no ACE, just a mass of opaque, hard-to-evaluate research orgs. Manifund’s regranting infrastructure lets individual donors outsource their giving decisions to people they trust, who may be more specialized and more qualified at grantmaking.

    All grant information is public. This includes the identity of the regrantor and grant recipient, the project description, the grant size, and the regrantor’s writeup. We strongly believe in transparency as it allows for meaningful public feedback, accountability of decisions, and establishment of regrantor track records.

    Almost everything is done through our website. This lets us move faster, act transparently, set good defaults, and encourage discourse about the projects in comment sections. We recognize that not all grants are suited for publishing; for now, we recommend sensitive grants apply to other donors (such as LTFF, SFF, OpenPhil).

    We’re starting with less money. The Future [...]

    Source:
    https://forum.effectivealtruism.org/posts/RMXctNAksBgXgoszY/announcing-manifund-regrants

    Linkpost URL:
    https://manifund.org/rounds/regrants

    ---

    Narrated by TYPE III AUDIO.

    Share feedback on this narration.

  • Link-post for two pieces I just wrote on the Extropians.

    The Extropians were an online group of techno-optimist transhumanist libertarians active in the 90s who influence a lot of online intellectual culture today, especially in EA and Rationalism. Prominent members include Eliezer Yudkowsky, Nick Bostrom, Robin Hanson, Eric Drexler, Marvin Minsky and all three of the likely candidates for Satoshi Nakamoto (Hal Finney, Wei Dai, and Nick Szabo).

    The first piece is a deep dive into the archived Extropian forum. It was super fun to write and I was constantly surprised about how much of the modern discourse on AI and existential risk had already been covered in 1996.

    The second piece is a retrospective on predictions made by Extropians in 1995. Eric Drexler, Nick Szabo and 5 other Extropians give their best estimates for when we'll have indefinite biological lifespans and reproducing asteroid eaters.

    Source:
    https://forum.effectivealtruism.org/posts/FrukYXHBSaZvb7f9Q/a-double-feature-on-the-extropians

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • I have released a new episode of my podcast, EA Critiques, where I interview David Thorstad. David is a researcher at the Global Priorities Institute and also writes about EA on his blog, Reflective Altruism.

    In the interview we discuss three of his blog post series:

    - Existential risk pessimism and the time of perils: Based on his academic paper of the same name, David argues that there is a surprising tension between the idea that there is a high probability of extinction (existential risk pessimism) and the idea that the expected value of the future, conditional on no existential catastrophe this century, is astronomically large.
    - Exaggerating the risks: David argues that the probability of an existential catastrophe from any source is much lower than many EAs believe. At time of recording the series only covered risks from climate change, but future posts will make the same case for nuclear war, pandemics, and AI.
    - Billionaire philanthropy: Finally, we talk about the potential issues with billionaires using philanthropy to have an outsized influence, and how both democratic societies and the EA movement should respond.

    As always, I would love feedback, on this episode or the podcast in general, and guest suggestions. You can write a comment here, send me a message, or use this anonymous feedback form.

    Source:
    https://forum.effectivealtruism.org/posts/riYnjstGwK7hREfRo/podcast-interview-with-david-thorstad-on-existential-risk

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • The study was published in Nature on May 31st, 2023.

    Key Points:

    Cash transfer programs had the following observed effects:

    - Deaths among women fell by 20%, largely driven by decreases in pregnancy-related deaths
    - Deaths among children less than 5 fell by 8%
    - No association between cash transfer programs and mortality among men; temporal analyses suggest a reduction in mortality among men over time, and a specific subgroup analysis (rather than population wide) found a 14% mortality reduction among men aged 18-40
    - 37 low and middle income countries studied, population wide: 4,325,484 in the adult dataset, 2,867,940 in the child dataset
    - No apparent differences between the effects of unconditional and conditional cash transfers

    Factors that lead to larger reductions in mortality:

    - Programs with higher coverage and larger cash transfer amounts
    - Countries with higher regulatory quality ratings
    - Countries with lower health expenditures per capita
    - Stronger association in sub-Saharan Africa relative to outside sub-Saharan Africa

    Citation: Richterman, A., Millien, C., Bair, E.F. et al. The effects of cash transfers on adult and child mortality in low- and middle-income countries. Nature (2023). https://doi.org/10.1038/s41586-023-06116-2

    Source:
    https://forum.effectivealtruism.org/posts/acBFLTsRw3fqa8WWr/large-study-examining-the-effects-of-cash-transfer-programs

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • Introduction

    To me, it is obvious that veganism introduces challenges to most people. Solving the challenges is possible for most but not all people, and often requires trade-offs that may or may not be worth it. I’ve seen effective altruist vegan advocates deny outright that trade-offs exist, or more often imply it while making technically true statements. This got to the point that a generation of EAs went vegan without health research, some of whom are already paying health costs for it, and I tentatively believe it’s harming animals as well. Discussions about the challenges of veganism and ensuing trade-offs tend to go poorly, but I think it’s too important to ignore. I’ve created this post so I can lay out my views as legibly as possible, and invite people to present evidence I’m wrong.

    One reason discussions about this tend to go so poorly is that the topic is so deeply emotionally and morally charged. Actually, it’s worse than that: it’s deeply emotionally and morally charged for one side in a conversation, and often a vague irritant to the other. Having your deepest moral convictions treated as an annoyance to others is an awful feeling, maybe worse than them having opposing but strong feelings. So I want to be clear that I respect both the belief that animals can suffer and the work advocates put into reducing that suffering. I don’t prioritize it as highly as you do, but I am glad you are doing (parts of) it.

    But it’s entirely possible for it to be simultaneously true that animal suffering is morally relevant, and veganism has trade-offs for most people. You can argue that the trade-offs don’t matter, that no cost would justify the consumption of animals, and I have a section discussing that, but even that wouldn’t mean the trade-offs don’t exist.

    This post covers a lot of ground, and is targeted at a fairly small audience. If you already agree with me I expect you can skip most of this, maybe check out the comments if you want the counter-evidence. I have a section addressing potential counter-arguments, and probably most people don’t need to read my response to arguments they didn’t plan on making. Because I expect modular reading, some pieces of information show up in more than one section. Anyone reading the piece end to end has my apologies for that.

    However, I expect the best arguments to come from people who have read the entire thing, and at a minimum the “my cruxes” and “evidence I’m looking for” sections. I also ask you to check the preemptive response section for your argument, and engage with my response if it relates to your point. I realize that’s a long read, but I’ve spent hundreds of hours on this, including providing nutritional services to veg*ns directly, so I feel like this is a reasonable request.

    My cruxes

    Below are all of the cruxes I could identify for my conclusion that veganism has trade-offs, and they include [...]

    Source:
    https://forum.effectivealtruism.org/posts/3Lv4NyFm2aohRKJCH/change-my-mind-veganism-entails-trade-offs-and-health-is-one

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • This is partly based on my experiences working as a Program Officer leading Open Phil’s Longtermist EA Community Growth team, but it’s a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.

    Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I more strongly hold most of the conclusions than I did when I originally wrote it.

    Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach.

    A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I’m concerned that this is a reason we’re failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people who are doing full-time work on existential risk reduction on AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023].

    This is in the vein of Neel Nanda’s "Simplify EA Pitches to "Holy Shit, X-Risk"" and Scott Alexander’s “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA.

    EA and longtermism: not a crux for doing the most important work

    Right now, my priority in my professional life is helping humanity navigate the imminent creation of potential transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that’s likely the most important thing anyone can do these days. And I don’t think EA or longtermism is a crux for this prioritization anymore.

    A lot of us (EAs who currently prioritize x-risk reduction) were “EA-first” — we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about [...]

    Source:
    https://forum.effectivealtruism.org/posts/cP7gkDFxgJqHDGdfJ/ea-and-longtermism-not-a-crux-for-saving-the-world

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • Apply to participate or facilitate, before 25th June 2023.

    We are looking for people who currently or might want to work in AI Governance and policy. If you have networks in or outside of EA who might be interested, we would appreciate you sharing this course with them.

    Full announcement

    The AI Safety Fundamentals (AISF): Governance Course is designed to introduce the key ideas in AI Governance for reducing extreme risks from future AI systems. Alongside the course, you will be joining our AI Safety Fundamentals Community. The AISF Community is a space to discuss AI Safety with others who have the relevant skills and background to contribute to AI Governance, whilst growing your network and awareness of opportunities.

    The last time we ran the AI Governance course was in January 2022, then under Effective Altruism Cambridge. The course is now run by BlueDot Impact, founded by members of the same team (and now based in London). We are excited to relaunch the course now, when AI Governance is a focal point for the media and political figures. We feel this is a particularly important time to support high-fidelity discussion of ideas to govern the future of AI.

    Note we have also renamed the website from AGI Safety Fundamentals to AI Safety Fundamentals. We'll release another post within the next week to explain our reasoning, and we'll respond to any discussion about the rebrand there.

    Apply here, by 25th June 2023.

    Time commitment

    The course will run for 12 weeks from July-September 2023. It comprises 8 weeks of reading and virtual small-group discussions, followed by a 4-week project. The time commitment is around 5 hours per week, so you can engage with the course and community alongside full-time work or study. The split will be 2-3 hours of preparatory work, and a 1.5-2 hour live session.

    Course structure

    The course is 12 weeks long and takes around 5 hours a week to participate in. For the first 8 weeks, participants will work through 2-3 hours of structured content to prepare for a weekly, facilitated small discussion group of 1.5-2 hours. Participants will be grouped depending on their current career stage and policy expertise. The facilitator will be knowledgeable about AI Governance, and can help to answer participants’ questions and point them to further resources.

    The final 4 weeks are project weeks. Participants can use this time to synthesise their views on the field and start thinking through how to put these ideas into practice, or start getting relevant skills and experience that will help them with the next step in their career. The course content is designed with input from a wide range of the community thinking about the governance of advanced AI. The curriculum will be updated before the course launches in mid-July.

    Target audience

    We think this course will particularly be able to help participants who:

    - Have policy experience, and are keen to apply their skills to reducing risk from AI.
    - Have a technical background, and want to learn about how they can use their skills to contribute to AI Governance.
    - Are early in their career or a student who is interested in [...]

    Source:
    https://forum.effectivealtruism.org/posts/bsbf4am9paoTq8Lrb/applications-open-for-ai-safety-fundamentals-governance

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • I’m excited to share that Lincoln Quirk has joined the board of Effective Ventures Foundation (UK). This follows the addition of Zach Robinson and Eli Rose to the EV US board about two months ago.

    Lincoln is a co-founder and Head of Product at Wave, a technology company that aims to make finance more accessible in Africa through digital infrastructure. Wave spun off from SendWave, a remittance platform which Lincoln also cofounded and which was acquired in 2021. He has maintained a deep interest in effective altruism and been a part of the EA community for over a decade.

    In his own words, "I'm excited to join the EV UK board. I've been trying to help the world and have called myself part of the EA community for 10+ years; EV is playing one of the most important roles in this community and correspondingly holds a lot of responsibility. I'm looking forward to helping figure out the best ways EV can contribute to making the world a better place through enabling EA community and EA projects.”

    The EV UK trustees and I are excited to have Lincoln join and look forward to working with him. Lincoln impressed us during the interview process with his strategic insight and dedication to the role. I also think his experience founding and growing Wave and Sendwave will be a particularly useful perspective to add to the board.

    We are continuing to look for candidates to add to the boards of both EV UK and EV US, especially candidates with diverse backgrounds and experiences, and who have experience in accounting, law, finance, risk management or other management at large organizations. We recruited Lincoln via directly reaching out to him, and plan to continue to source candidates this way and via our open call. If you are interested or know of a great candidate, the linked forum post includes instructions for applying or nominating someone. Applications and nominations will be accepted until June 4th.

    Source:
    https://forum.effectivealtruism.org/posts/NK5mDoedYeuorhkAf/lincoln-quirk-has-joined-the-ev-uk-board

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • From late 2020 to last month, I worked at grassroots-level non-profits in operational roles. Over that time, I’ve seen surprisingly effective deployments of strategies that were counter-intuitive to my EA and rationalist sensibilities.

    I spent 6 months being the on-shift operations manager at one of the five largest food banks in Toronto (~50 staff/volunteers), and 2 years doing logistics work at Samaritans (fake name), a long-lived charity that was so multi-armed that it was basically operating as a supplementary social services department for the city it was in (~200 staff and 200 volunteers). Both of these non-profits were well-run, though both dealt with the traditional non-profit double whammy of being underfunded and understaffed.

    Neither place was super open to many EA concepts (explicit cost-benefit analyses, the ITN framework, geographic impartiality, the general sense that talent was the constraining factor instead of money, etc). Samaritans in particular is a spectacular non-profit, despite(?) having basically anti-EA philosophies, such as:

    - Being very localist; Samaritans was established to help residents of the city it was founded in, and is now very specialized in doing that.
    - Adherence to faith; the philosophy of The Catholic Worker Movement continues to inform the operating choices of Samaritans to this day.
    - A big streak of techno-pessimism; technology is first and foremost seen as a source of exploitation and alienation, and adopted only with great reluctance when necessary.
    - Not treating money as fungible. The majority of funding came from grants or donations tied to specific projects or outcomes. (This is a system that the vast majority of nonprofits operate in.)

    Once early on I gently pushed them towards applying to some EA grants for some of their more EA-aligned work, and they were immediately turned off by the general vibes of EA upon visiting some of its websites. I think the term “borg-like” was used.

    Over this post, I’ll be largely focusing on Samaritans as I’ve worked there longer and in a more central role, and it’s also a more interesting case study due to its stronger anti-EA sentiment.

    Things I Learned

    1. Long Term Reputation is Priceless
    2. Non-Profits Shouldn’t Be Islands
    3. Slack is Incredibly Powerful
    4. Hospitality is Pretty Important

    For each learning, I have a section for sketches for EA integration – I hesitate to call them anything as strong as recommendations, because the point is to give more concrete examples of what it could look like integrated in an EA framework, rather than saying that it’s the correct way forward.

    1. Long Term Reputation is Priceless

    Institutional trust unlocks a stupid amount of value, and you can’t buy it with money. Lots of resources (amenity rentals; the mayor’s endorsement; business services; pro-bono and monetary donations) are priced/offered based on tail risk. If you can establish that you’re not a risk by having a longstanding, unblemished reputation, costs go way down for you, and opportunities way up. This is the world that Samaritans now operate in.

    Samaritans had a much better, easier time at city hall compared to newer organizations, because of a decades-long productive relationship where we were really helpful with issues surrounding unemployment and homelessness. Permits get [...]

    Source:
    https://forum.effectivealtruism.org/posts/5oTr4ExwpvhjrSgFi/things-i-learned-by-spending-five-thousand-hours-in-non-ea

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • tl;dr: create an equivalent of GWWC for building career capital. We've thought about this idea for ~15 minutes and are unlikely to do something ourselves, but wanted to share it because we think it might be good.

    Many people's greatest path to impact is through changing their career. But for a lot of these people, particularly those earlier in their career, it doesn't make sense to immediately apply to impact-oriented jobs. Instead, it's better for them to build career capital at non-impact-oriented workplaces, i.e. "earning to learn". It would be nice if there was some equivalent of the Giving What We Can pledge for this.

    It could involve something like pledging to:

    - Spend at least one day per year updating your career plan with an eye towards impact
    - Apply to at least x impact-oriented jobs per year, even if you expect to get rejected

    And some sort of dashboard checking people's adherence to this, and nudging them to adhere better.

    Some potential benefits:

    - Many people who have vague plans of "earning to learn" just end up drifting away after entering the mainstream workforce; this can help them stay engaged
    - It might relieve some of the pressure around being rejected from "EA jobs" – making clear that Official Fancy EA People endorse career paths beyond "work at one of this small list of organizations" puts less pressure on people who aren't a good fit for one of those small list of organizations
    - Relatedly, it gives community builders a thing to suggest to a relatively broad set of community members which is robustly good

    Next steps:

    - I think the MVP here requires ~0 technology: come up with the pledge, get feedback on it, and if people are excited throw it into a Google form
    - It's probably worth reading criticisms of the GWWC pledge (e.g. this) to understand some of the failure modes here and be sure you avoid those
    - It also requires thinking through some of the risks, e.g. you might not want a fully public pledge since that could hurt people's job prospects
    - If you are interested in taking on this project, please contact one of us and we can try to help

    Source:
    https://forum.effectivealtruism.org/posts/qnYm5MtBJcKyvYvfo/an-earn-to-learn-pledge

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • TL;DR

    Everything that looks like exponential growth eventually runs into limits and slows down. AI will quite soon run into limits of compute, algorithms, data, scientific progress, and predictability of our world. This reduces the perceived risk posed by AI and gives us more time to adapt.

    Disclaimer

    Although I have a PhD in Computational Neuroscience, my experience with AI alignment is quite low. I haven’t engaged in the field much except for reading Superintelligence and listening to the 80k Hours podcast. Therefore, I may duplicate or overlook arguments obvious to the field or use the wrong terminology.

    Introduction

    Many arguments I have heard around the risks from AI go a bit like this: We will build an AI that will be as smart as humans, then that AI will be able to improve itself. The slightly better AI will again improve itself in a dangerous feedback loop and exponential growth will ultimately create an AI superintelligence that has a high risk of killing us all.

    While I do recognize the other possible dangers of AI, such as engineering pathogens, manipulating media, or replacing human relationships, I will focus on that dangerous feedback loop, or “exponential AI takeoff”. There are, of course, also risks from human-level-or-slightly-smarter systems, but I believe that the much larger, much less controllable risk would come from “superintelligent” systems. I’m arguing here that the probability of creating such systems via an “exponential takeoff” is very low.

    Nothing grows exponentially indefinitely

    This might be obvious, but let’s start here: Nothing grows exponentially indefinitely. The textbook example for exponential growth is the growth of bacteria cultures. They grow exponentially until they hit the side of their petri dish, and then it’s over. If they’re not in a lab, they grow exponentially until they hit some other constraint, but in the end, all exponential growth is constrained. If you’re lucky, actual growth will look logistic (“S-shaped”), where the growth rate approaches 0 as resources are eaten up. If you’re unlucky, the population implodes.

    For the last decades, we have seen things growing and growing without limit, but we’re slowly seeing a change. Human population is starting to follow an S-curve, the number of scientific papers has been growing fast but is starting to flatten out, and even Silicon Valley has learnt that Metcalfe’s Law of exponential network benefits doesn’t work due to the limits imposed by network complexity.

    I am assuming that everybody will agree with the general argument above, but the relevant question is: When will we see the “flattening” of the curve for AI? Yes, eventually growth is limited, but if that limit kicks in once AI has used up all the resources of our universe, that’s a bit too late for us. I believe that the limits will kick in as soon as AI will reach our level of knowledge, give or take a magnitude, and here is why:

    We’re reaching the limits of Moore’s law

    First and foremost, the growth of processing power is what enabled the growth of AI. I’m not going to guess when we reach [...]
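
    To make the S-curve described above concrete (a standard textbook form, added here as an illustration rather than taken from the post), logistic growth follows

    \[
    \frac{dN}{dt} = r N \left(1 - \frac{N}{K}\right), \qquad N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\, e^{-r t}},
    \]

    which looks exponential while N is far below the carrying capacity K and flattens towards zero growth as N approaches K.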

    Source:
    https://forum.effectivealtruism.org/posts/wY6aBzcXtSprmDhFN/exponential-ai-takeoff-is-a-myth

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • 0. Introduction

    An essay being worth $50,000 is a bold claim, so here is another— the person best equipped to adjust your AI existential risk predictions is the 18th-century Scottish historian David Hume.

    Eliezer Yudkowsky has used Hume’s is-ought problem to argue that it’s possible, in principle, for powerful AI agents to have any goals. Even a system with cognitive mastery over “is” data requires a framework for “ought” drives to be defined, as the transition from factual knowledge to value-driven objectives is not inherently determined. Bluntly, Eliezer writes that “the world is literally going to be destroyed because people don't understand Hume's is-ought divide. Philosophers, you had ONE JOB.”

    As a student of Hume, I think this is a limited picture of what he has to offer this conversation. My goal, however, is not to refute Eliezer’s example or to even engage in x-risk debates at the level of a priori reasoning. Hume’s broader epistemological and historical writing critiques this method. Hume was not a lowercase-r rationalist. He thought knowledge was derived from experiences and warned that it’s easy to lead yourself astray with plausible sounding abstractions. If you are a capital-R Rationalist, I will argue you should review your foundational epistemological assumptions, because even a small update may ripple out to significantly greater uncertainty about existential risk from AI.

    The central premises in Hume’s thought that I think you should consider are:

    1. All knowledge is derived from impressions of the external world. Our ability to reason is limited, particularly about ideas of cause and effect with limited empirical experience.
    2. History shows that societies develop in an emergent process, evolving like an organism into an unknown and unknowable future. History was shaped less by far-seeing individuals informed by reason than by contexts which were far too complex to realize at the time.

    In this essay, I will argue that these premises are true, or at least truer than the average person concerned about existential risk from AI holds them to be. I hope David Hume can serve as a guide to the limits of “arguing yourself” into any strong view of the future based on a priori reasoning. These premises do not mean that AI safety should be ignored, but they should unsettle strong/certain views.

    The best practical example of premise #1 is Anthropic’s description of “empiricism in AI safety.” Anthropic does argue that there is evidence AI will have a large impact, that we do not know how to train systems to robustly behave well, and that “some scary, speculative problems might only crop up” once AI systems are very advanced. Yet they caution that “the space of possible AI systems, possible safety failures, and possible safety techniques is large and difficult to traverse from the armchair alone.” Anthropic is committed to AI safety, but within an empiricist epistemology. Their portfolio approach is built upon uncertainty: “Some researchers who care about safety are motivated by a strong opinion on the nature of AI risks. Our experience is that even predicting the behavior and properties of AI systems in the near future is very difficult. Making [...]

    Source:
    https://forum.effectivealtruism.org/posts/MDkYSuCzFbEgGgtAd/ai-doom-and-david-hume-a-defence-of-empiricism-in-ai-safety

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • Big mistakes = Doing something that is actively harmful or useless by their own lights and values, i.e. doesn't help them achieve their life goals. (Not: Doing something that isn't in line with your values and goals.)

    A lot of people think that others in the EA-ish community are trying to do something impactful but end up doing something harmful or useless. Sometimes they also work on something that they are just not very good at or make other big mistakes. A lot of people never end up telling the other person that they think they are making big mistakes. Sometimes people also just have one particular argument for why the other might do harmful or useless work but not be sure whether it's bad overall. This also often goes unsaid.

    I think that's understandable and also bad or at least very costly.

    Epistemic status: Speculation/rant. I know of another person who might post something on this topic that is much more rigorous and has actual background research.

    Upsides of telling others you think they are making big mistakes, wasting their time, or doing harm:

    - It's good on a community level because people get information that's useful to decide how to achieve their goals (among them, having impact), so people end up working on less suboptimal things and the community has better impact overall.
    - It's good on a community level because it pushes towards good intellectual conversations and progress.
    - I and probably others find it stressful because I can't rely on others telling me if they think I'm doing a bad job, so I have to try to read between the lines. (I find it much less stressful now, but when I was more insecure about my competence, I found it really stressful. I think one of my main concerns was others thinking and saying I'm "meh" or "fine" (with an unenthusiastic tone) but only behind my back.) Note that anxiety works differently for different people though, and some people might find the opposite is true for them. See reasons against telling people that you think they are wasting their time or worse.
    - I and probably others find it pretty upsetting that I can't rely on others being honest with me. It's valuable information and I would like people to act in a way that helps me achieve my stated goals (in this case, doing good), especially if their motivation for not being honest with me is protecting my wellbeing.

    That said, I often don't do a great job at this myself and think telling others you think their efforts would be better spent elsewhere also has significant costs, both personal and on a community level.

    Downsides of telling others you think they are making big mistakes, wasting their time, or doing harm:

    - Hearing that somebody thinks you're doing harmful or useless work can be extremely discouraging and can lead people to over-update, especially if they are insecure anyway. (Possibly because people do it so rarely, so the signal can be interpreted as stronger than it's intended.)

    At the same time, we [...]

    Source:
    https://forum.effectivealtruism.org/posts/fMCnMCMSEjanhAwpM/probably-tell-your-friends-when-they-make-big-mistakes

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.

  • The Global Innovation Fund (GIF) is a non-profit, impact-first investment fund headquartered in London that primarily works with mission-aligned development agencies (USAID, SIDA, Global Affairs Canada, UKAID). Through grants, loans and equity investments, they back innovations with the potential for social impact at a large scale, whether these are new technologies, business models, policy practices or behavioural insights.

    Recently, they made a bold but little publicized projection in their 2022 Impact Report (page 18): "We project every dollar that GIF has invested to date will be three times as impactful as if that dollar had been spent on long-lasting, insecticide-treated bednets... This is three times higher than the impact per dollar of Givewell’s top-rated charities, including distribution of anti-malarial insecticide-treated bednets. By Givewell’s estimation, their top charities are 10 times as cost-effective as cash transfers."

    This is a short post to highlight GIF's projection to the EA community and to invite comments and reactions. Here are a few initial points:

    - It's exciting to see an organization with relatively traditional funders comparing its impact to GiveWell's top charities (as well as cash transfers).
    - I would want to see more information on how they did their calculations before taking a view on their projection.
    - In any case, based on my conversations with GIF, and what I've understood about their methodology, I think their projection should be taken seriously. I can see ways it could be either an overestimate or an underestimate.
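
    Taken at face value (a rough chaining of the quoted multipliers, not a calculation from the report itself), the projection implies roughly

    \[
    \text{GIF} \approx 3 \times \text{GiveWell top charities} \approx 3 \times (10 \times \text{cash transfers}) = 30 \times \text{cash transfers}.
    \]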

    Source:
    https://forum.effectivealtruism.org/posts/zuoYcZJh5FEywpvzA/global-innovation-fund-projects-its-impact-to-be-3x-givewell

    Did you find this narration helpful? How could it be improved?
    As an experiment, we're releasing AI narrations of all new posts with >125 karma on the "EA Forum (All audio)" podcast. Please share your thoughts and/or report bugs in the narration.

    Narrated by TYPE III AUDIO.