muckrAIkers

By: Jacob Haimes and Igor Krawczuk
  • Summary

  • Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, bringing some much-needed contextualization, constructive critique, and even a smidge of occasional good-willed teasing to the conversation as we try to find the meaning under all of this muck.
    © Kairos.fm
    Episodes
    • US National Security Memorandum on AI, Oct 2024
      Nov 6 2024

      October 2024 saw a US National Security Memorandum on AI and an accompanying framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways.

      • (00:48) - The memorandum
      • (06:28) - What the press is saying
      • (10:39) - What's in the text
      • (13:48) - Potential harms
      • (17:32) - Miscellaneous notable stuff
      • (31:11) - What's the US government's take on AI?
      • (45:45) - The civil side - comments on reporting
      • (49:31) - The commenters
      • (01:07:33) - Our final hero
      • (01:10:46) - The muck


      Links
      • United States National Security Memorandum on AI
      • Fact Sheet on the National Security Memorandum
      • Framework to Advance AI Governance and Risk Management in National Security

      Related Media

      • CAIS Newsletter - AI Safety Newsletter #43
      • NIST report - Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
      • ACLU press release - ACLU Warns that Biden-Harris Administration Rules on AI in National Security Lack Key Protections
      • Wikipedia article - Presidential Memorandum
      • Reuters article - White House presses gov't AI use with eye on security, guardrails
      • Forbes article - America’s AI Security Strategy Acknowledges There’s No Stopping AI
      • DefenseScoop article - New White House directive prods DOD, intelligence agencies to move faster adopting AI capabilities
      • NYTimes article - Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools
      • Forbes article - 5 Things To Know About The New National Security Memorandum On AI – And What ChatGPT Thinks
      • Federal News Network interview - A look inside the latest White House artificial intelligence memo
      • Govtech article - Reactions Mostly Positive to National Security AI Memo
      • The Information article - Biden Memo Encourages Military Use of AI

      Other Sources

      • Physical Intelligence press release - π0: Our First Generalist Policy
      • OpenAI press release - Introducing ChatGPT Search
      • WhoPoo App!!
      1 hr 16 min
    • Understanding Claude 3.5 Sonnet (New)
      Oct 30 2024

      Frontier developers continue their war on sane versioning schemes to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now.


      • (00:00) - Intro
      • (00:22) - Hot off the press
      • (05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000
      • (09:23) - Breaking down "computer use"
      • (13:16) - Our understanding
      • (16:03) - Diverging business models
      • (32:07) - Why has Anthropic chosen this strategy?
      • (43:14) - Changing the frame
      • (48:00) - Polishing the lily

      Links

      • Anthropic press release - Introducing Claude 3.5 Sonnet (New)
      • Model Card Addendum

      Other Anthropic Relevant Media

      • Paper - Sabotage Evaluations for Frontier Models
      • Anthropic press release - Anthropic's Updated RSP
      • Alignment Forum blogpost - Anthropic's Updated RSP
      • Tweet - Response to scare regarding Anthropic training on user data
      • Anthropic press release - Developing a computer use model
      • Simon Willison article - Initial explorations of Anthropic’s new Computer Use capability
      • Tweet - ARC Prize performance
      • The Information article - Anthropic Has Floated $40 Billion Valuation in Funding Talks

      Other Sources

      • LWN.net article - OSI readies controversial Open AI definition
      • National Security Memorandum
      • Framework to Advance AI Governance and Risk Management in National Security
      • Reuters article - Mother sues AI chatbot company Character.AI, Google over son's suicide
      • Medium article - A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models
      • The Guardian article - Google's solution to accidental algorithmic racism: ban gorillas
      • TIME article - Ethical AI Isn’t to Blame for Google’s Gemini Debacle
      • Latacora article - The SOC2 Starting Seven
      • Grandview Research market trends - Robotic Process Automation Market Trends
      1 hr 1 min
    • Winter is Coming for OpenAI
      Oct 22 2024

      Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode we look at OpenAI's recent massive funding round and ask, "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready!


      • (00:00) - Intro
      • (00:28) - Hot off the press
      • (02:43) - Why listen?
      • (06:07) - Why might VCs invest?
      • (15:52) - What are people saying
      • (23:10) - How *is* OpenAI making money?
      • (28:18) - Is AI hype dying?
      • (41:08) - Why might big companies invest?
      • (48:47) - Concrete impacts of AI
      • (52:37) - Outcome 1: OpenAI as a commodity
      • (01:04:02) - Outcome 2: AGI
      • (01:04:42) - Outcome 3: best plausible case
      • (01:07:53) - Outcome 1*: many ways to bust
      • (01:10:51) - Outcome 4+: shock factor
      • (01:12:51) - What's the muck
      • (01:21:17) - Extended outro

      Links

      • Reuters article - OpenAI closes $6.6 billion funding haul with investment from Microsoft and Nvidia
      • Goldman Sachs report - GenAI: Too Much Spend, Too Little Benefit
      • Apricitas Economics article - The AI Investment Boom
      • Discussion of "The AI Investment Boom" on YCombinator
      • State of AI in 13 Charts
      • Fortune article - OpenAI sees $5 billion loss in 2024 and soaring sales as big ChatGPT fee hikes planned, report says

      More on AI Hype (Dying)

      • Latent Space article - The Winds of AI Winter
      • Article by Gary Marcus - The Great AI Retrenchment has Begun
      • TimmermanReport article - AI: If Not Now, When? No, Really - When?
      • MIT News article - Who Will Benefit from AI?
      • Washington Post article - The AI Hype bubble is deflating. Now comes the hard part.
      • Andreessen Horowitz article - Why AI Will Save the World

      Other Sources

      • Human-Centered Artificial Intelligence Foundation Model Transparency Index
      • Cointelegraph article - Europe gathers global experts to draft ‘Code of Practice’ for AI
      • Reuters article - Microsoft's VP of GenAI research to join OpenAI
      • Twitter post from Tim Brooks on joining DeepMind
      • Edward Zitron article - The Man Who Killed Google Search
      1 hr 23 min

    What listeners say about muckrAIkers



    There are no reviews available for this title yet.