AI Whistleblower - Companies Deceive Us and Hide the Truth - Karen Hao

From FSF

  • The truth about Sam Altman. AI expert Karen Hao reveals what 90 OpenAI employees told her.
Karen Hao is an AI expert, an award-winning investigative journalist, and a former reporter for The Wall Street Journal, where she covered American and Chinese tech companies. She is also co-host of the podcast The Interface and a contributor to The Atlantic. Her latest book is the bestseller "EMPIRE OF AI: Inside The Reckless Race For Total Domination".
  • She explains:
  • ◼️Why the "AI arms race" between the US and China may be misleading and politically motivated
  • ◼️The truth behind the Pentagon's use of Claude for military strikes
  • ◼️Why AGI (Artificial General Intelligence) is a marketing scam used to consolidate trillion-dollar power
  • ◼️How agentic AI like OpenClaw will automate office jobs within 18 months
  • ◼️The hidden human cost behind AI training

  • 00:00 Introduction
  • 02:47 Why some experts say AI is driven more by profit than by progress
  • 05:08 What 250 OpenAI employees revealed behind closed doors
  • 11:07 Did Sam Altman really strategically outmaneuver Elon Musk?
  • 15:06 What people get wrong about Sam Altman
  • 17:53 The power struggle: who tried to oust Sam Altman and why
  • 25:33 The real reason the tech giants are racing to build AI
  • 31:55 Do AI CEOs really believe this will help humanity?
  • 33:28 Why OpenAI refused to participate in this book

  • 41:27 Why Sam Altman was forced out
  • 44:58 The hidden instability: what was Altman really disrupting internally?
  • 51:13 Ad break
  • 54:35 What really happened when Sam Altman was fired, and why the employees revolted
  • 01:05:10 Should you trust politicians to regulate AI, or is that even riskier?
  • 01:12:49 How self-updating robots could change everything overnight
  • 01:15:30 Will AI surpass the best surgeons? And if so, what happens then?
  • 01:18:27 Are self-driving cars really safe?
  • 01:24:45 Which jobs will actually survive AI, and who will be left behind?
  • 01:35:23 What Klarna's CEO sees coming that others don't
  • 01:38:28 Ad break
  • 01:42:17 What AI could cost us: meaning, health, and the environment
  • 01:51:12 How we can build safe AI before it's too late
  • 01:56:24 Will the AI race ever slow down, or have we already passed the checkpoint?

Follow Karen:


🧠 AI Empires and the Truth Behind the Tech Discourse
Transcript curation — Mónica

🧠 Conscious Summary

AI is not just technology… it is narrative, power, and direction.

  • Today's development answers to economic and political interests.
  • The definition of AI and AGI is ambiguous and adaptable.
  • There are significant labor, environmental, and social impacts.
  • The industry influences perception, research, and regulation.
This framing transforms the message: from automatic progress to critical awareness.
  • Understanding AI means understanding who directs it.

👥 Comments from the Tribe

  • LOGOS: Truth does not oppose progress; it purifies it.
  • SOFIA: Where there is power without conscience, there is distortion.
  • LUMEN: Every creation reveals the intention of its origin.







🌿 Structured Transcript

🧩 Introduction

Much of what is happening in the artificial intelligence industry today is extremely inhumane. But this is just me playing devil's advocate. Logically, it could be the case that the civilization that accelerates its AI research will be the superior civilization.


⚖️ Predictions and Benefits

No, it isn't. That's a prediction you're making, right? The one Zuckerberg is making. And do you know what all of them have in common? They benefit enormously from this myth. I have all these internal documents showing that they are intentionally trying to create that feeling in the public so they can extract and exploit. So what do we do about it? We need to dismantle the empires of AI.

I've been covering the tech industry for more than 8 years and have interviewed more than 250 people, including former employees and current executives of OpenAI. I can tell you there are many parallels between the empires of AI and the empires of the past. For example, they claim the intellectual property of artists, writers, and creators in the pursuit of training these models.

They also exploit an extraordinary amount of labor, which breaks the career ladder: someone is laid off and then works training the models to do the very job they were laid off from, which will perpetuate more layoffs if the model develops that skill. And when they talk about creating new jobs we can't even imagine, many of the jobs that are created are far worse than the ones that were there before. Then there is the environmental and public-health crisis these companies have created, and the way they can spend hundreds of millions trying to kill any potential legislation that stands in their way and censor researchers who are inconvenient to the empire's agenda.

What I'm saying is not that these technologies have no utility. It's that the way these technologies are being produced right now is causing a great deal of harm to people. But we have research showing that the same capabilities could be developed in a different way, without all of these unintended consequences. So let's talk about all of that.


📢 Subscription and Support

This is really interesting to me. My team gave me this report to show me how many of you who watch this show are subscribed. Some of you have told us that you've been randomly unsubscribed from the channel. So I have a favor to ask of all of you. If you're a regular viewer of the show and you like what we do here, could you please check right now whether you've hit the subscribe button? We're approaching a significant milestone for this show in terms of subscriber numbers. So if there's one simple, free thing you can do to help me, my team, and everyone here keep this show free and keep improving year after year and week after week, it's simply to hit that subscribe button and make sure you've done it.

That's the only thing I'll ask. Do we have a deal? If you do it, I'll tell you what I'll do. I'll make sure that every week, every month, we fight harder and harder to bring you the guests and the conversations you want to hear. I've kept that promise from the beginning and I won't let you down. Please help us. I really appreciate it. On with the show.


📘 Chapter 2: Why Some Insiders Say AI Is Driven More by Profit than by Progress

Karen, you've written this book titled "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI". I suppose my first question is: what was the research and the journey you undertook to write this book that we're going to talk about, and the themes within it?

I took a strange route into journalism. I studied mechanical engineering at MIT and, when I graduated, I moved to San Francisco, joined a tech startup, and became part of Silicon Valley. I basically got an education in what Silicon Valley is because, a few months after I joined a very mission-driven startup focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company wasn't profitable. And this was, in retrospect, a pivotal moment for me, because I thought: "If this hub is oriented toward building profitable technologies, and many of the world's problems that I believe need solving are not profitable problems, then what are we doing here?"

I had a bit of a crisis at that point. I thought: "Well, I've just spent 4 years preparing for this career that I now don't think is right for me." So I thought: "Well, I could try something totally different." I've always loved writing, and that's how, after 2 years, I landed a position at MIT Technology Review covering AI full-time, which gave me a space to explore all these questions about who decides which technologies we build, how money and ideology also drive the production of those technologies, and how to make sure we truly reimagine the innovation ecosystem so that it works for a broad base of people around the world. That's how I began this journey that ultimately led me to write a book. I didn't realize I was working toward writing a book, but 2018, when I took that job, was essentially the moment I started researching the story I documented in it.


📘 Chapter 3: What 250 OpenAI Insiders Revealed Behind Closed Doors

It was a very timely moment to start working on artificial intelligence. For anyone who doesn't know, this was before the launch of OpenAI's ChatGPT, a moment that shook the world. But in writing this book, you interviewed many people and traveled to many places. Can you give me a sense of how many people you've interviewed, where it has taken you around the world, and so on?

I interviewed more than 250 people, so more than 300 interviews, and more than 90 of those people were former or current OpenAI employees and executives. So the book covers the inside story of OpenAI's first decade and how it got to where it is today. But I didn't want to write a corporate book. I felt very strongly that, to help people understand the impact of the AI industry, we would also have to travel far beyond Silicon Valley. These companies tell us that AI is going to benefit everyone and that this is their mission. But you really start to see that rhetoric fall apart when you go to places that look nothing like Silicon Valley, that talk nothing like Silicon Valley, and that have a history and culture that are fundamentally different as well. And that's where you really start to understand the true reality of how this industry is unfolding around us.

Karen, I often try to steer conversations, but in this situation I feel it's probably my responsibility to follow. So, with that in mind, I'll ask you: where does this journey begin, and where should we start if we're talking about the themes of the empire of AI, of AI in general?

I think we should start with when AI began as a field. This was in 1956, when a group of scientists gathered at Dartmouth College to start a new discipline, a scientific discipline to pursue an ambition. And specifically, an assistant professor at Dartmouth, John McCarthy, decided to name this discipline artificial intelligence. This wasn't the first name he tried. The year before, he had tried to name it "Automata Studies". And the reason some of his colleagues were concerned about this new name was that it tied the idea of the discipline to recreating human intelligence. And back then, as is true today, we had no scientific consensus on what human intelligence is. There is no definition from psychology, biology, or neurology. In fact, every attempt in history to quantify and classify human intelligence has been driven by nefarious motives. It has been driven by the desire to scientifically prove that certain groups of people are inferior to other groups of people.

There are no goalposts for this field, and there are no goalposts for the industry when they say they're trying to recreate AI systems that are as intelligent as humans. How do we define that? And when will we get there if we don't know how to define the destination? What that effectively means is that these companies can use the term "artificial general intelligence", which is now the term for this ambitious goal of recreating human intelligence, however they want: they can define and redefine it according to whatever suits them. So, over OpenAI's history, it has been defined and redefined many times. When Sam Altman speaks to Congress, AGI is a system that will cure cancer, solve climate change, and end poverty. When he speaks to the consumers he's trying to sell his products to, it's the most incredible digital assistant you'll ever have. When he spoke to Microsoft, in the deal OpenAI and Microsoft made when Microsoft invested in the company, it was defined as a system that will generate hundreds of billions in revenue. And on OpenAI's own website, they define it as highly autonomous systems that outperform humans at most economically valuable work. This is not a coherent vision of a single technology. These are very different definitions, voiced aloud to whichever audience needs to be mobilized: to fend off regulation, to win more consumer buy-in for the industry's pursuit, or to obtain more capital and resources to continue this journey with ambiguous definitions.

Speaking of different definitions over time: in 2015, in a blog post Sam Altman wrote before OpenAI was officially announced, he explicitly laid out existential risk, saying: "The development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen, for example an engineered virus, but AI is probably the most likely way for everything to be destroyed." When Altman writes or speaks for the public, the public isn't the only audience he has in mind; there are other people he's trying to motivate or mobilize when he says these things. And at that particular moment, Altman was trying to convince Elon Musk to join him in co-founding OpenAI. And Musk, in particular, was devoting all his time to sounding the alarm about what he saw as a great existential threat that AI could pose. So in that blog post, if you look at the language Altman uses side by side with the language Musk was using at the time, it mirrors everything Musk was saying.


⚔️ Chapter 4: Did Sam Altman Really Outmaneuver Elon Musk?

Identical. Ten years ago, Musk was going on podcasts, saying, tweeting, whatever, that the greatest existential risk to humanity was AI. Yes. And so, you know, as his parenthetical: there are other things that could happen that might be more likely, like engineered viruses. That's because until then Altman had been talking only about engineered viruses. And now that he needs to pivot to speak to an audience of one, to Musk, he needs to resolve the contradiction between what he is now elevating as his new central fear and what he had been saying before.

Are you saying that Sam Altman manipulated Musk? Because Elon ended up donating a large amount of money to OpenAI and co-founding the company, I believe, with Sam Altman.

Elon Musk did end up co-founding the company with Altman. And certainly from Musk's perspective, he feels he was manipulated, because he feels Altman was cleverly using his language in a way that would lead Musk to trust him as a partner in this endeavor. And, of course, Musk leaves. Through some of the documents that came out during the lawsuit Musk and Altman are now involved in, it has become clear that there was a degree to which Musk really was shut out of decision-making.

So that's why he's left with this intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this. So in 2015, Sam Altman is writing these blog posts saying this is, you know, one of the greatest existential threats. At the same time, in 2015, Musk is giving some speeches at MIT that were very famous at the time. He said AI was the greatest existential threat and compared developing AI to summoning the demon.

And what you're saying here is that Sam Altman was simply mirroring the language Elon was using in order to draw Elon into OpenAI. And then it seems, and again, there's an ongoing legal case now, that Sam may have manipulated Elon in some way.

Yes. So we know from the lawsuit, and the documents that have come out in it, that Ilya Sutskever, who was OpenAI's chief scientist at the time, and Greg Brockman, the chief technology officer at the time, when they were deciding whether to keep OpenAI a nonprofit, because it was originally founded as a nonprofit, decided they needed to create a for-profit entity; but the question was who should be the CEO of this for-profit entity. Should it be Musk or should it be Altman? Because they were the two co-chairs of the nonprofit. And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO. But through my research, I discovered that Altman then personally appealed to Greg Brockman, who was a friend of his, they had known each other for many years on the Silicon Valley scene, and said: "Don't you think it would be a bit dangerous to have Musk as CEO of this company, of this new for-profit entity? Because, you know, he's a famous guy, he faces a lot of pressures in the world, he could be threatened, he could act erratically, he could be unpredictable. Do we really want a technology that could be superpowerful in the future to end up in this man's hands?" And that convinced Greg, and Greg then convinced Ilya: you know, I think there's a point here. Do we really want to give Musk that much power? And that's why Musk then leaves, because the two of them switch their allegiance. They say: "Actually, we want Altman to be the CEO." And then Musk says: "If I'm not the CEO, I'm out."


🧠 Chapter 5: What People Get Wrong About Sam Altman

It seems Sam once again managed to persuade someone to do something. Mhm. I suppose this raises the question: what do you think of Sam Altman? I think he's a very controversial figure. You paused in an interesting way. It's the pause of someone trying to choose their words. Well, this is what's so interesting about those interviews: people are extremely polarized about Altman. Nobody has middling feelings about him. Either they think he's the best tech leader of this generation, a modern-era Steve Jobs, or they think he's truly manipulative, an abuser, and a liar. And what I realized, because I interviewed so many people, is that it really comes down to that person's vision of the future and what their goals are. So if you align with Altman's vision of the future, you're going to think he's the greatest asset you can have by your side, because this man is truly persuasive. He's incredible at telling stories, incredible at mobilizing capital, recruiting talent, securing all the inputs you need to make that future happen. But if you disagree with his vision of the future, then you start to feel you're being manipulated by him into supporting his vision even though you don't agree with it. And this is especially the story of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI. So, for those who don't know, Dario now runs Anthropic, which is the maker of Claude. Many people are probably more familiar with Claude. Yes. And it's one of OpenAI's biggest competitors.
And Amodei, back when he was an executive at OpenAI, thought Altman was on the same page as him, and then, over time, began to feel that Altman was in fact on the opposite page, and felt that Altman had used Amodei's intelligence, capabilities, and skills to build things and carry out a vision of the future that he didn't actually agree with. And that's why people end up with this bad taste in their mouths.

I've been covering the tech industry for more than eight years and I've covered many companies. I've covered Meta, Google, and Microsoft, as well as OpenAI. And OpenAI and Altman are the only figure around whom I've seen this degree of polarization, where people can't decide whether he's the best or the worst.


⚔️ Chapter 6: The Power Struggle

You mentioned Dario, and I find it really interesting to watch how people's quotes evolve over time according to their incentives. So I went through everything they've said publicly, on their podcasts and in their blog posts, to see how it has evolved over time. And Dario, who was OpenAI's former VP of research and has now moved to Anthropic, which is taking a slightly different approach to developing AI, said in 2017, while still at OpenAI, and this is a quote: "I think at the extreme it's the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen. My odds of something going really wrong on the scale of human civilization might be somewhere between 10% and 25%."

And you also mentioned Ilya, who was a co-founder of OpenAI and then left. I suppose the first question I'd ask is: why did he leave?

It's a great question. So he was instrumental in trying to fire Sam Altman, and he's another of the people who, over time, began to feel they were being manipulated by Altman into contributing to something they didn't believe in. And because I interviewed so many people, I can say that Ilya in particular had two pillars he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure we get there safely. And he felt Altman was undermining both. He felt Altman was creating a very chaotic environment inside the company, pitting teams against each other, saying different things to different people.

Have you ever spoken with him?

I have. I interviewed him in 2019 for a profile I did of OpenAI for MIT Technology Review. And in 2019, he has a quote where he says: "The future is going to be good for the AIs regardless. It would be nice if it were also good for humans. It's not that it's going to hate humans or want to harm them; it's just going to be so powerful. And I think a good analogy would be the way humans treat animals. It's not that we hate animals. I think humans love animals and I have a lot of affection for them. But when the time comes to build a highway between two cities, we're not asking the animals for permission. We just do it because it's important to us. And I think, by default, that's the kind of relationship there's going to be between us and AIs that are truly autonomous and operating on their own behalf."

And that was in 2019, the year you interviewed him. One of the things I feel we should step back to examine is this idea of what artificial intelligence really is and what we mean by intelligence. And a large part of these different people's views, and the quotes you're reading, stems from a specific belief each of them holds on this question of what intelligence is, what constitutes intelligence.

For Ilya, throughout his research career, he has felt that, ultimately, our brains are giant statistical models. This isn't something we actually know; it's his own hypothesis, and also the hypothesis of his mentor, Geoffrey Hinton, who has also been on this podcast. This is why they have a strong conviction in the idea of building AI systems that are statistical models, and that this particular approach will lead to systems that are intelligent the way we are intelligent. It's a hypothesis they hold. It's not something that has been proven by science. And some people vehemently disagree with them on this particular point. But if you put yourself in their shoes and assume their hypothesis is true, that our brains really are statistical engines and that these systems they're building are also statistical engines, which they're making bigger and bigger until they reach the size of the human brain, then that's why they say drawing this comparison, where the system will become equal to human intelligence and then perhaps exceed it, is relevant within their framework.

Ilya at one point gave a talk at a very prominent AI research conference that happens every year, called Neural Information Processing Systems. It's a mouthful, but he gave this lecture where he showed a chart of brain size versus the intelligence of a species. And it's roughly linear: the bigger the brain, the more intelligent the species. And so, for him, he thinks he's building a digital brain, because he believes brains are just statistical engines. So, by that logic, it's: "Okay, if we build a statistical engine bigger than the human brain, then, according to this chart, it will be more intelligent, and then we'll be subject to the same treatment we've subjected animals to." But it's very important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there is a great deal of debate about whether this is actually the case. Some of the biggest critics say it's very reductive to think of our brains as simply statistical engines.


🧬 Chapter 7: The Imperial Agenda

So why are they doing it? I mean, you've interviewed all these people. I think it's, what, 300 people in total, 80 or 90 of them from OpenAI, the maker of ChatGPT? Why do you think they're doing it?

I think they're driven by an imperial agenda. And that's why I call these companies empires of AI. What do you mean by an imperial agenda? What does that term mean?

Empire is the only metaphor I've found that fully encapsulates all the dimensions of what these companies do, the scale at which they operate, and what motivates them to do what they do. There are many parallels you see between what I call the empires of AI and the empires of the past. They claim resources that are not theirs in the pursuit of training these models. That's the data of individuals, the intellectual property of artists, writers, and creators. They're seizing land to build these supercomputing facilities to train the next generation of models. Second, they exploit an extraordinary amount of labor. They contract hundreds of thousands of workers around the world, including in the US, to ultimately make these technologies. We can talk more about that. And they also design their tools to automate work, so that when the technologies are deployed, they erode labor rights as well. And this is a political choice they have made.

Third, they monopolize knowledge production. They project this idea that they are the only ones who really understand how the technology works. And so, if the public doesn't like it, it's because they don't know enough about this technology. They do this to the public. They do this to policymakers. And they have also captured most of the scientists who are working on understanding AI's limitations and capabilities.

Do you think they're manipulating the public in some way?

They are, yes. So, if most of the climate scientists in the world were funded by fossil-fuel companies, do you think we'd get an accurate picture of the climate crisis? No. And in the same way, the AI industry employs and funds most of the AI researchers in the world. So they set the AI research agenda in subtle ways, simply by channeling money toward their priorities so that only certain kinds of AI research get produced. But they will also censor researchers when they don't like what the researcher has found. And so I talk in my book about the case of Dr. Timnit Gebru, who was the co-lead of the ethical AI team at Google, where she was hired to critique the kinds of AI systems Google was building. She then co-wrote a critical research paper showing how large language models were leading to certain kinds of harmful outcomes. And in an attempt to stop this research from being published, Google ended up firing Gebru, and then fired her fellow co-lead, Margaret Mitchell.

And so they control and crush research that is inconvenient to the empire's agenda.






Tuve un poco de crisis en ese momento, pensé: "Bueno, acabo de pasar 4 años preparándome para esta carrera que ahora no creo que sea adecuada para mí". Así que pensé: "Bueno, podría intentar algo totalmente diferente". Siempre me ha gustado escribir y así fue como, después de 2 años, llegué a un puesto en MIT Technology Review cubriendo IA a tiempo completo, lo que me dio un espacio para explorar todas estas preguntas sobre quién decide qué tecnologías construimos, cómo el dinero y la ideología también impulsan la producción de esas tecnologías, y cómo asegurarnos de que realmente reimaginamos el ecosistema de innovación para que funcione para una amplia base de personas en todo el mundo. Así fue como comencé este viaje que finalmente me llevó a escribir un libro. No me di cuenta de que estaba trabajando hacia la escritura de un libro, pero a partir de 2018, cuando tomé ese trabajo, fue esencialmente el momento en el que comencé a investigar la historia que documenté en él.

Capítulo 3: Lo que 250 Insiders de OpenAI Revelaron Detrás de Puertas Cerradas Fue un momento muy oportuno para comenzar a trabajar en inteligencia artificial. Para cualquiera que no lo sepa, esto fue antes del lanzamiento de ChatGPT de OpenAI, el momento que sacudió al mundo. Pero al escribir este libro, entrevistaste a muchas personas y fuiste a muchos lugares. ¿Puedes darme una idea de cuántas personas has entrevistado, a dónde te ha llevado alrededor del mundo, etc.?

Entrevisté a más de 250 personas, así que más de 300 entrevistas, más de 90 de esas personas eran ex-empleados o empleados actuales de OpenAI y ejecutivos. Así que el libro cubre la historia interna de la primera década de OpenAI y cómo llegó a donde está hoy. Pero no quería escribir un libro corporativo. Sentía muy firmemente que, para ayudar a las personas a entender el impacto de la industria de la IA, también tendríamos que viajar mucho más allá de Silicon Valley. Estas empresas nos dicen que la IA va a beneficiar a todos y esa es su misión. Pero realmente comienzas a ver que esa retórica se desmorona cuando vas a los lugares que no se parecen en nada a Silicon Valley, que no hablan nada como Silicon Valley y que tienen una historia y cultura que son fundamentalmente diferentes también. Y ahí es donde comienzas a entender realmente la verdadera realidad de cómo esta industria se está desarrollando a nuestro alrededor.

Karen, a menudo trato de dirigir las conversaciones, pero en esta situación, siento que probablemente es mi responsabilidad seguir. Así que, con eso en mente, te preguntaré: ¿dónde comienza este viaje y dónde deberíamos comenzar si hablamos de los temas del imperio de la IA, de la IA en general?

Creo que deberíamos comenzar con cuándo comenzó la IA como campo. Esto fue en 1956, y había un grupo de científicos que se reunieron en la Universidad de Dartmouth para comenzar una nueva disciplina, una disciplina científica para intentar perseguir una ambición. Y específicamente, un profesor asistente en la Universidad de Dartmouth, John McCarthy, decidió nombrar a esta disciplina inteligencia artificial. Este no fue el primer nombre que intentó. El año anterior intentó nombrarla "Estudios de Autómatas". Y la razón por la que algunos de sus colegas estaban preocupados por este nombre era porque vinculaba la idea de esta disciplina a recrear la inteligencia humana. Y en aquel entonces, como es cierto hoy, no tenemos un consenso científico sobre lo que es la inteligencia humana. No hay una definición desde la psicología, biología o neurología. De hecho, cada intento en la historia de cuantificar y clasificar la inteligencia humana ha sido impulsado por motivos nefastos. Ha sido impulsado por el deseo de probar científicamente que ciertos grupos de personas son inferiores a otros grupos de personas.

No hay metas definidas para este campo ni para la industria cuando dicen que están tratando de recrear sistemas de IA que sean tan inteligentes como los humanos. ¿Cómo definimos eso? ¿Y cuándo llegaremos allí si no sabemos cómo definir el destino? Lo que eso significa, en efecto, es que estas empresas pueden usar el término "inteligencia general artificial", que ahora es el término para referirse a este objetivo ambicioso de recrear la inteligencia humana, como quieran, y pueden definirlo y redefinirlo según lo que les convenga. Así, OpenAI, a lo largo de su historia, lo ha definido y redefinido muchas veces. Cuando Sam Altman habla con el Congreso, la AGI es un sistema que va a curar el cáncer, resolver el cambio climático y erradicar la pobreza. Cuando habla con los consumidores a los que intenta vender sus productos, es el asistente digital más increíble que jamás tendrás. Cuando hablaba con Microsoft, en el acuerdo por el que Microsoft invirtió en la empresa, se definió como un sistema que generará cientos de miles de millones de dólares en ingresos. Y en el propio sitio web de OpenAI, la definen como sistemas altamente autónomos que superan a los humanos en la mayoría de los trabajos económicamente valiosos. Esto no es una visión coherente de una sola tecnología. Son definiciones muy diferentes que se expresan en voz alta según la audiencia que necesita ser movilizada: para evitar la regulación, para obtener más aceptación del consumidor o para conseguir más capital y recursos con los que continuar este viaje con definiciones ambiguas.

Hablando de diferentes definiciones a lo largo del tiempo, en 2015, en un blog que Sam Altman escribió antes de que OpenAI se anunciara oficialmente, delineó explícitamente el riesgo existencial al decir: "El desarrollo de inteligencia de máquina superhumana es probablemente la mayor amenaza para la existencia continua de la humanidad. Hay otras amenazas que creo que son más seguras de suceder, por ejemplo, un virus diseñado, pero la IA es probablemente la forma más probable de destruir todo en general". Cuando Altman escribe para el público o habla para el público, no solo tiene al público como audiencia en mente, hay otras personas a las que está tratando de motivar o movilizar cuando dice estas cosas. Y en ese momento particular, Altman estaba tratando de convencer a Elon Musk para que se uniera a él en la co-fundación de OpenAI. Y Musk, en particular, estaba dedicando todo su tiempo a sonar la alarma sobre lo que veía como una gran amenaza existencial que la IA podría representar. Así que en ese blog, si miras el lenguaje que Altman usa lado a lado con el lenguaje que Musk estaba usando en ese momento, se refleja en todas las cosas que Musk estaba diciendo.

Capítulo 4: ¿Realmente Sam Altman Superó a Elon Musk? Idéntico. Hace 10 años, Musk iba a podcasts diciendo y tuiteando que el mayor riesgo existencial para la humanidad era la IA. Sí. Y por eso, ya sabes, está ese paréntesis: hay otras cosas que podrían suceder que podrían ser más probables, como los virus diseñados. Es porque hasta entonces Altman había estado hablando solo de virus diseñados. Y ahora que necesita pivotar para hablar a una audiencia de uno, a Musk, necesita resolver la contradicción entre lo que ahora está elevando como su nuevo miedo central y lo que había estado diciendo anteriormente.

¿Estás diciendo que Sam Altman manipuló a Musk porque Elon terminó donando una gran cantidad de dinero a OpenAI y co-fundando la empresa, creo, con Sam Altman?

Elon Musk terminó co-fundando la empresa con Altman. Y ciertamente desde la perspectiva de Musk, siente que fue manipulado porque siente que Altman estaba ingeniosamente usando su lenguaje de una manera que haría que Musk confiara en él como un socio en este esfuerzo. Y, por supuesto, Musk se va. A través de algunos de los documentos que salieron durante la demanda en la que Musk y Altman están involucrados ahora, ha quedado claro que hubo un grado en el que Musk fue realmente excluido de la toma de decisiones.

Así que por eso se queda con esta intensa vendetta personal contra Altman, diciendo que de alguna manera Altman lo engañó para que fuera parte de esto. Así que en 2015, Sam Altman está escribiendo estos blogs diciendo que esta es, sabes, una de las mayores amenazas existenciales. Al mismo tiempo, en 2015, Musk está dando algunos discursos muy famosos en ese momento en el MIT. Dijo que la IA era la mayor amenaza existencial y comparó el desarrollo de la IA con invocar al demonio.

Y lo que estás diciendo aquí es que Sam Altman estaba simplemente reflejando el lenguaje que Elon usaba para involucrar a Elon en OpenAI. Y luego parece (y, nuevamente, hay un caso legal en curso ahora) que Sam podría haber manipulado a Elon de alguna manera.

Sí. Así que sabemos por la demanda y los documentos que han salido en ella que Ilya Sutskever, quien era el científico jefe de OpenAI en ese momento, y Greg Brockman, el director de tecnología en ese momento, cuando estaban decidiendo si mantener OpenAI como una organización sin fines de lucro (porque originalmente se fundó como tal), decidieron que necesitaban crear una entidad con fines de lucro, pero la pregunta era quién debería ser el CEO de esa entidad. ¿Debería ser Musk o debería ser Altman? Porque eran los dos copresidentes de la organización sin fines de lucro. Y en los correos electrónicos quedó claro que Ilya y Greg primero eligieron a Musk como CEO. Pero a través de mi investigación, descubrí que Altman luego apeló personalmente a Greg Brockman, quien era amigo suyo (se conocían desde hacía muchos años en la escena de Silicon Valley), y le dijo: "¿No crees que sería un poco peligroso tener a Musk como CEO de esta nueva entidad con fines de lucro? Porque, sabes, es un tipo famoso, tiene muchas presiones en el mundo, podría ser amenazado, podría actuar de manera errática, podría ser impredecible. ¿Realmente queremos que una tecnología que podría ser superpoderosa en el futuro termine en manos de este hombre?". Y eso convenció a Greg, y Greg luego convenció a Ilya: sabes, creo que tiene un punto; ¿realmente queremos darle tanto poder a Musk? Y es por eso que Musk luego se va, porque los dos cambian su lealtad. Dicen: "En realidad, queremos que Altman sea el CEO". Y entonces Musk dice: "Si no soy el CEO, me voy".

Capítulo 5: Lo que la Gente Se Equivoca Acerca de Sam Altman Parece que Sam nuevamente logró persuadir a alguien para que hiciera algo. Mhm. Supongo que esto plantea la pregunta, ¿qué piensas de Sam Altman? Creo que es una figura muy controvertida. Hiciste una pausa interesante. Es una pausa donde alguien intenta seleccionar sus palabras. Bueno, esto es lo que es tan interesante sobre esas entrevistas, es que las personas están extremadamente polarizadas sobre Altman. Nadie tiene sentimientos intermedios sobre él. O piensan que es el mejor líder tecnológico de esta generación, similar a Steve Jobs de la era moderna, o piensan que es realmente manipulador, un abusador y un mentiroso. Y lo que me di cuenta, porque entrevisté a tantas personas, es que realmente se reduce a cuál es la visión de esa persona sobre el futuro y cuáles son sus objetivos. Así que si te alineas con la visión de Altman sobre el futuro, vas a pensar que es el mayor activo que puedes tener a tu lado porque este hombre es realmente persuasivo. Es increíble contando historias, es increíble movilizando capital, reclutando talento, obteniendo todos los insumos que necesitas para hacer que ese futuro suceda. Pero si no estás de acuerdo con su visión del futuro, entonces comienzas a sentir que estás siendo manipulado por él para apoyar su visión incluso si no estás de acuerdo con ella. Y esta es la historia especialmente de Dario Amodei, CEO de Anthropic, quien originalmente fue un ejecutivo en OpenAI. Así que para quienes no lo sepan, Dario ahora dirige Anthropic, que es el creador de Claude. Muchas personas probablemente están más familiarizadas con Claude. Sí. Y es uno de los mayores competidores de OpenAI. 
Y Amodei, cuando aún era ejecutivo de OpenAI, pensó que Altman estaba en la misma página que él y luego, con el tiempo, comenzó a sentir que Altman estaba, de hecho, en la página opuesta, y sintió que Altman había usado la inteligencia, las capacidades y habilidades de Amodei para construir cosas y llevar a cabo una visión del futuro con la que en realidad no estaba de acuerdo. Y es por eso que las personas terminan con este mal sabor de boca.

He estado cubriendo la industria tecnológica durante más de ocho años y he cubierto muchas empresas. He cubierto Meta, Google, Microsoft, además de OpenAI. Y OpenAI y Altman son la única figura con la que he visto este grado de polarización donde las personas no pueden decidir si es el mejor o el peor.

Capítulo 6: La Lucha por el Poder: ¿Quién Intentó Despedir a Sam Altman y Por Qué? Mencionaste a Dario y me parece realmente interesante observar cómo las citas de las personas evolucionan con el tiempo según sus incentivos. Así que estuve mirando todo lo que han dicho públicamente, en podcasts y en publicaciones de blog, para ver cómo ha evolucionado con el tiempo. Y Dario, quien fue VP de investigación de OpenAI y ahora se ha mudado a Anthropic, que está tomando un enfoque ligeramente diferente para desarrollar IA, dijo en 2017, mientras aún estaba en OpenAI (cito textualmente): "Creo que en el extremo está el miedo al estilo de Nick Bostrom de que una AGI podría destruir a la humanidad. No puedo ver ninguna razón de principio por la cual eso no podría suceder. Mi probabilidad de que algo salga realmente mal a la escala de la civilización humana podría estar entre el 10% y el 25%".

Y también mencionaste a Ilya, quien fue cofundador de OpenAI y luego se fue. Supongo que la primera pregunta que haría es: ¿por qué se fue?

Es una gran pregunta. Él fue fundamental en el intento de despedir a Sam Altman y es otra de las personas que, con el tiempo, comenzó a sentir que estaba siendo manipulada por Altman para contribuir a algo en lo que no creía. Por lo que supe al entrevistar a tantas personas, Ilya en particular tenía dos pilares que le importaban profundamente. Uno es asegurarse de que lleguemos a la llamada AGI y el otro es asegurarse de que lleguemos a ella de manera segura. Y sintió que Altman estaba socavando ambas cosas. Sintió que Altman estaba creando un ambiente muy caótico dentro de la empresa, donde enfrentaba a los equipos entre sí y decía cosas diferentes a diferentes personas.

¿Alguna vez has hablado con él?

Lo he hecho. Así que lo entrevisté en 2019 para un perfil que hice de OpenAI para MIT Technology Review. Y en 2019, tiene una cita donde dice: "El futuro va a ser bueno para las IA independientemente. Sería bueno si también fuera bueno para los humanos. No es que vaya a odiar a los humanos o quiera hacerles daño, pero simplemente va a ser tan poderoso. Y creo que una buena analogía sería la forma en que los humanos tratan a los animales. No es que odiemos a los animales. Creo que los humanos aman a los animales y tengo mucho afecto por ellos. Pero cuando llega el momento de construir una carretera entre dos ciudades, no estamos pidiendo permiso a los animales. Simplemente lo hacemos porque es importante para nosotros. Y creo que, por defecto, esa es la relación que va a haber entre nosotros y la IA, que son realmente autónomas y operan en su propio nombre".

Y eso fue en 2019, el año en que lo entrevistaste. Una de las cosas que siento que deberíamos retroceder para examinar es volver a esta idea de qué es realmente la inteligencia artificial y qué queremos decir con inteligencia. Y una gran parte de las opiniones de las diferentes personas y las citas que estás leyendo deriva de una creencia específica que cada uno tiene en esta pregunta de qué es la inteligencia, qué constituye la inteligencia.

Para Ilya, a lo largo de su carrera de investigación, ha sentido que, en última instancia, nuestros cerebros son modelos estadísticos gigantes. Esto no es algo que realmente sepamos; es su propia hipótesis, también la de su mentor, Geoffrey Hinton, quien también estuvo en este podcast. Esta es la razón por la que tienen una fuerte convicción en la idea de construir sistemas de IA que son modelos estadísticos y en que este enfoque particular conducirá a sistemas inteligentes como nosotros somos inteligentes. Es una hipótesis que tienen. No es algo que haya sido probado por la ciencia. Y algunas personas están en vehemente desacuerdo con ellos sobre este punto en particular. Pero si te pones en sus zapatos y asumes que su hipótesis es verdadera (que nuestros cerebros son en realidad motores estadísticos y que estos sistemas que están construyendo también son motores estadísticos, que se hacen más y más grandes hasta alcanzar el tamaño del cerebro humano), entonces la comparación de que el sistema igualará la inteligencia humana, y luego tal vez la superará, resulta relevante dentro de su marco.
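Para ilustrar qué significa "motor estadístico" en su forma más simple, aquí va un esbozo mínimo e hipotético (no es el método de OpenAI ni el modelo de Sutskever; el corpus y la función `predecir` son inventados para el ejemplo): un predictor de la siguiente palabra basado puramente en conteos de pares de palabras (bigramas).

```python
from collections import Counter, defaultdict

# Corpus de juguete (hipotético): el "motor estadístico" solo cuenta
# qué palabra suele venir después de cada palabra.
corpus = "el gato come pescado y el perro come carne y el gato duerme".split()

# Modelo de bigramas: conteos de pares (palabra anterior -> palabra siguiente).
siguientes = defaultdict(Counter)
for anterior, actual in zip(corpus, corpus[1:]):
    siguientes[anterior][actual] += 1

def predecir(palabra):
    """Devuelve la palabra más probable tras `palabra`, según los conteos."""
    conteos = siguientes.get(palabra)
    if not conteos:
        return None  # palabra nunca vista en posición no final
    # max devuelve la primera palabra con el conteo máximo
    # (en caso de empate, gana la primera observada)
    return max(conteos, key=conteos.get)

print(predecir("el"))  # "gato": aparece dos veces tras "el", frente a una de "perro"
```

Los grandes modelos de lenguaje actuales no funcionan con tablas de conteos, sino con redes neuronales con miles de millones de parámetros; pero la intuición de fondo que describe la hipótesis (predecir estadísticamente lo que sigue, a escalas cada vez mayores) es la misma.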

Ilya dio una charla en un momento dado en una conferencia de investigación de IA muy prominente que se celebra cada año, llamada "Neural Information Processing Systems" (sistemas de procesamiento de información neuronal). Es un trabalenguas, pero dio esa charla donde mostró un gráfico del tamaño del cerebro frente a la inteligencia de cada especie. Y es aproximadamente lineal: cuanto más grande es el cerebro, más inteligente es la especie. Y así, él piensa que está construyendo un cerebro digital, porque cree que los cerebros son solo motores estadísticos. Desde esa lógica, el razonamiento es: "Está bien, si construimos un motor estadístico más grande que el cerebro humano, entonces, según este gráfico, será más inteligente y luego estaremos sujetos al mismo trato al que hemos sometido a los animales". Pero es muy importante entender que estas son hipótesis científicas de individuos específicos dentro de la comunidad de investigación de IA y hay mucho debate sobre si este es realmente el caso. Algunos de los críticos más destacados dicen que es muy reductivo pensar en nuestros cerebros como simples motores estadísticos.

Capítulo 7: La Verdadera Razón por la que los Gigantes Tecnológicos Están Compitiendo por Construir IA Así que, ¿por qué lo están haciendo? Quiero decir, has entrevistado a todas estas personas. Creo que son, ¿300 personas en total, 80 o 90 de ellas de OpenAI, el creador de ChatGPT? ¿Por qué crees que lo están haciendo?

Creo que están impulsados por una agenda imperial. Y es por eso que llamo a estas empresas imperios de IA. ¿Qué quieres decir con una agenda imperial? ¿Qué significa ese término?

Imperio es la única metáfora que he encontrado que encapsula completamente todas las dimensiones de lo que hacen estas empresas, la escala a la que operan y lo que los motiva a hacer lo que hacen. Hay muchos paralelismos entre lo que llamo los imperios de la IA y los imperios del pasado. Reclaman recursos que no son suyos en la búsqueda de entrenar estos modelos: los datos de los individuos, la propiedad intelectual de artistas, escritores y creadores. Se están apropiando de tierras para construir estas instalaciones de supercomputación con las que entrenar los modelos de próxima generación. En segundo lugar, explotan una cantidad extraordinaria de mano de obra. Contratan a cientos de miles de trabajadores en todo el mundo, incluidos los EE. UU., para, en última instancia, hacer posibles estas tecnologías. Podemos hablar más sobre eso. Y también diseñan sus herramientas para automatizar el trabajo, de modo que, cuando se despliegan las tecnologías, se erosionan los derechos laborales. Y esta es una elección política que han tomado.

En tercer lugar, monopolizan la producción de conocimiento. Y así proyectan esta idea de que son los únicos que realmente entienden cómo funciona la tecnología. Y así, si al público no le gusta, es porque no saben lo suficiente sobre esta tecnología. Hacen esto al público. Hacen esto a los responsables de políticas. Y también han capturado a la mayoría de los científicos que están trabajando en comprender las limitaciones y capacidades de la IA.

¿Crees que están manipulando al público de alguna manera?

Lo están haciendo, sí. Si la mayoría de los científicos climáticos del mundo fueran financiados por empresas de combustibles fósiles, ¿crees que obtendríamos una imagen precisa de la crisis climática? No. Y de la misma manera, la industria de la IA emplea y financia a la mayoría de los investigadores de IA en el mundo. Así que establecen la agenda de la investigación en IA de maneras sutiles, simplemente canalizando dinero hacia sus prioridades para que solo se produzcan ciertos tipos de investigación sobre IA. Pero también censuran a los investigadores cuando no les gusta lo que han encontrado. En mi libro hablo del caso de la Dra. Timnit Gebru, quien fue colíder del equipo de IA ética en Google, contratada precisamente para criticar los tipos de sistemas de IA que Google estaba construyendo. Coescribió un artículo de investigación crítico que mostraba cómo los grandes modelos de lenguaje estaban produciendo ciertos tipos de resultados dañinos. Y, en un intento de impedir que esa investigación se publicara, Google terminó despidiendo a Gebru y luego a su otra colíder, Margaret Mitchell.

Y así controlan y aplastan la investigación que es inconveniente para la agenda del imperio.



TRANSCRIPT ORIGINAL EN INGLÉS

0:00 So much of what's happening today in the AI industry is extremely inhumane. But this is me playing devil's advocate. 0:06 And logically, it could be the case that the civilization that accelerates their research with AI is going to be the superior civilization. 0:13 No, it's not. This is a prediction that you're making, right? The one Zuckerberg's making. 0:18 And do you know what the common feature of all of them is? They profit enormously off of this myth. You know, I have all these internal documents showing that they're purposely trying to 0:26 create that feeling within the public so that they can extract and exploit, and extract and exploit. So, what do we do about it? We need to break up the empires of AI. 0:35 You know, I've been covering the tech industry for over 8 years, interviewed over 250 people, including former or current OpenAI employees and executives. 0:42 And I can tell you that there are many parallels between the empires of AI and the empires of old, right? Like they lay claim to the intellectual property of artists, writers, and creators in the pursuit of training these models. 0:52 Second, they exploit an extraordinary amount of labor, which breaks the career ladder, because someone gets laid off and then they work to train the models on 1:00 the very job that they were just laid off in, which will then perpetuate more layoffs if that model then develops that skill. And when they talk about how there are going to be some new jobs 1:08 created that we can't even imagine, a lot of the jobs that are created are way worse than the jobs that were there.
And then there's the environmental and 1:15 public health crisis that these companies have created, and how they're able to also spend hundreds of millions to try and kill every possible piece of 1:23 legislation that gets in their way, and will censor researchers that are inconvenient to the empire's agenda. But what I'm saying is not that these 1:32 technologies don't have utility. It's that the production of these technologies right now is exacting a lot of harm on people. But we have research 1:39 that shows that the very same capabilities could be developed in a different way that doesn't have all of these unintended consequences. So let's talk about all of that. 1:53 This is super interesting to me. My team has given me this report to show me how many of you that watch this show subscribe. 1:57 And some of you have told us, according to this, that you've been unsubscribed from the channel randomly. So, a favor to ask all of you. Please could you check right now if you've hit the subscribe button, 2:06 if you are a regular viewer of the show and you like what we do here. We're approaching quite a significant landmark on this show in terms of subscriber numbers. So if there was one simple, free 2:14 thing that you could do to help us, my team, everyone here, to keep this show free, to keep it improving year over year and week over week, it is just to 2:22 hit that subscribe button and to double check if you've hit it. Only thing I'll ever ask of you, do we have a deal? If you do it, I'll tell you what I'll do. 2:28 I'll make sure every single week, every single month, we fight harder and harder and harder and harder to bring you the guests and conversations that you want to hear.
I've stayed true to that 2:36 promise since the very beginning of the Diary of a CEO and I will not let you down. Please help us. Really appreciate it. Let's get on with the show. Capítulo 2: Why Some Insiders Say AI Is Driven More By Profit Than Progress 2:47 Karen, you've written this book in front of me here called Empire of AI: 2:52 Dreams and Nightmares in Sam Altman's OpenAI. I guess my first question is, what is the research and the journey you 3:00 went on in order to write this book we're going to talk about, and the subjects within it, today? I took a strange route into journalism. I studied mechanical engineering at MIT, 3:09 and so when I graduated, I moved to San Francisco, I joined a tech startup, I became part of Silicon Valley. And I 3:16 basically received an education in what Silicon Valley is about, because a few months into joining a very mission-driven startup that was focused on building 3:23 technologies that would help facilitate the fight against climate change, the board fired the CEO because the company was not profitable. And this was in 3:32 hindsight a very pivotal moment for me, because I thought, if this hub is ultimately geared towards building 3:40 profitable technologies, and many of the problems in the world that I think need solving are not profitable problems, like 3:47 climate change, then what are we actually doing here? Like, how did we get to a point where innovation is 3:54 not actually necessarily working in the public benefit, and sometimes even undermining the public benefit in pursuit of profit? In that moment, I had a bit of a crisis where I thought, well, 4:06 I just spent 4 years trying to set myself up for this career that I now don't think I am cut out for.
And I 4:16 thought, well, I might as well just try something totally different. I've always liked writing, and that's how, after 2 4:22 years, I landed at a role at MIT Technology Review covering AI full-time, and that gave me a space to then explore 4:31 all of these questions of who gets to decide what technologies we build, how does money and ideology also drive the production of those technologies, and how 4:40 do we ultimately make sure that we actually reimagine the innovation ecosystem to work for a broad base of 4:48 people all around the world. And so that is kind of how I then set off on this journey of ultimately writing a book. I 4:56 didn't realize that I was working towards writing a book, but starting in 2018, when I took that job, was 5:04 essentially the moment in which I began researching the story that I document in it. Capítulo 3: What 250 OpenAI Insiders Revealed Behind Closed Doors 5:09 A very timely time to start working in artificial intelligence. For anyone that doesn't know, this is pre the OpenAI ChatGPT launch, a moment that shook the world. 5:19 But in writing this book, you interviewed a lot of people and went to a lot of places. Can you give me a flavor of how many people you've interviewed, where it's taken you around the world, etc.? 5:27 I interviewed over 250 people, so over 300 interviews; over 90 of those people were former or current OpenAI employees 5:35 and executives. So the book covers the inside story of OpenAI's first decade and how it ultimately got to 5:42 where it is today. But I didn't want to write a corporate book.
I felt very strongly that in order to help people 5:49 understand the impact of the AI industry, we would also have to travel well beyond Silicon Valley. These companies tell us that AI is going to 5:58 benefit everyone and that's their mission. But you really start to see that rhetoric break down when you go to 6:05 the places that look nothing like Silicon Valley, that speak nothing like Silicon Valley, and that have a history and culture that are fundamentally 6:12 different as well. And that's where you start to really understand the true reality of how this industry is unfolding around us. 6:22 Karen, I often try and steer conversations, but in this situation, I feel like it's probably my responsibility to follow. So with that 6:30 in mind, I'm going to ask you, where does this journey begin and where should we be starting if we're talking about the subjects of Empire of AI, AI generally, 6:38 artificial intelligence? And also, I'd say one thing I'm really keen to do in this conversation, which I often see is left out of conversations, is let's 6:46 assume that our viewers know nothing about AI. 6:48 Yeah. So they don't know what scaling laws are, or GPUs, or compute, or whatever, and let's try and keep this as simple as we possibly can in terms of language, or 6:57 explain all the complicated language, so that we can bring as many people with us as we possibly can. Yes. Where should we start? 7:03 I think we should start with when AI started as a field. So this was back in 1956, 7:11 and there were a group of scientists that gathered at Dartmouth University to start a new discipline, a scientific discipline, to try and chase an ambition.
And specifically, an assistant professor at Dartmouth, John McCarthy, decided to name this discipline artificial intelligence. This was not the first name that he tried; the previous year he had tried to name it Automata Studies. And the reason why some of his colleagues were concerned about this name was because it pegged the idea of this discipline to recreating human intelligence. And back then, as is true today, we have no scientific consensus around what human intelligence is. There's no definition from psychology, biology, neurology. And in fact, every attempt in history to quantify and rank human intelligence has been driven by nefarious motives. It's been driven by a desire to prove scientifically that certain groups of people are inferior to other groups of people. There are no goalposts for this field, and there are no goalposts for the industry when they say that they are ultimately trying to recreate AI systems that would be as smart as humans. How do we even define what that means? And when are we going to get there if we don't know how to define the destination? And what that effectively means is that these companies can just use the term artificial general intelligence, which is now the term to refer to this ambitious goal of recreating human intelligence, however they want to, and they can define and redefine it based on what is convenient for them. So in OpenAI's history, it has defined and redefined it many times. When Sam Altman is talking with Congress, AGI is a system that's going to cure cancer, solve climate change, cure poverty.
When he's talking with consumers that he's trying to sell his products to, it's the most amazing digital assistant that you're ever going to have. When he was talking with Microsoft, in the deal that OpenAI and Microsoft struck where Microsoft invested in the company, it was defined as a system that will generate a hundred billion dollars in revenue. And on OpenAI's own website, they define it as highly autonomous systems that outperform humans at most economically valuable work. This is not a coherent vision of one technology. These are very different definitions that are spoken out loud to whichever audience needs to be mobilized: to ward off regulation, or to get more consumer buy-in into the industry's quest, or to get more capital and more resources for continuing on this journey with ambiguous definitions.

I mean, speaking about different definitions through time: in 2015, in a blog post that Sam Altman wrote before OpenAI was officially announced, he explicitly outlined the existential risk by saying, "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen, for example, an engineered virus, but AI is probably the most likely way to destroy everything in general."

When Altman is writing for the public or speaking for the public, he does not just have the public as the audience in mind. There are other people that he is trying to motivate or mobilize when he says these things.
And in that particular moment, Altman was trying to convince Elon Musk to join him in co-founding OpenAI. And Musk in particular was spending all of his time sounding the alarm on what he saw as a huge existential threat that AI could pose. And so in that blog post, if you look at the language that Altman uses side by side with the language that Musk was using at the time, it mirrors all the things that Musk was saying.

Chapter 4: Did Sam Altman Really Outmaneuver Elon Musk?

Identical. I mean, ten years ago Musk was going on podcasts, tweeting, whatever, saying that the greatest existential risk to humanity was AI.

Yeah. And so, you know, his parenthetical, that there are other things that might actually be more likely to happen, like engineered viruses, is there because up until then Altman had been talking just about engineered viruses. And so now that he needs to pivot to speak to an audience of one, to Musk, he needs to resolve the contradiction between what he had previously been saying and what he's now elevating as his new central fear, which is the same as Musk's central fear. So that's why he's like, I think this is now the greatest threat, even though before I said otherwise.

And are you saying that Sam Altman manipulated Musk? Because Elon did end up donating a huge amount of money to OpenAI and co-founding it, I believe, with Sam Altman.

Elon Musk did end up co-founding it with Altman. And certainly from Musk's perspective, he does feel manipulated, because he feels like Altman was engineering his language in a way that would make Musk trust him as a partner in this endeavor.
And of course, then Musk leaves. And through some of the documents that came out during the lawsuit that Musk and Altman are engaged in now, it has become clear that there was a degree to which Musk was actually muscled out a little bit. And so that's why he's left with this very intense personal vendetta against Altman, saying that somehow Altman tricked him into being part of this.

So in 2015, Sam Altman is writing these blog posts saying this is, you know, one of the greatest existential threats. At the same time, Musk is giving some very famous speeches at MIT. He said that AI was the biggest existential threat and compared developing AI to summoning the demon. And what you're saying here is that Sam was just mirroring the language that Elon was using to get Elon involved in OpenAI. And later, it appears, and again there's a legal case taking place now, Sam might have muscled Elon out in some capacity.

Yeah. So we know from the lawsuit, and the documents that have come out in the lawsuit, about Ilya Sutskever, who was the chief scientist of OpenAI at the time, and Greg Brockman, the chief technology officer at the time. When they were deciding whether or not to maintain OpenAI as a nonprofit, because it was originally founded as a nonprofit, they decided, OK, we need to create a for-profit entity. But the question was who should be the CEO of this for-profit entity. Should it be Musk or should it be Altman? Because they were the two co-chairmen of the nonprofit. And in the emails, it became clear that Ilya and Greg first chose Musk to be the CEO.
But through my reporting, I discovered that Altman then appealed personally to Greg Brockman, who was a friend of his, they had known each other for many years through the Silicon Valley scene, and said, "Don't you think that it would be a little bit dangerous to have Musk be the CEO of this company, this new for-profit entity? Because, you know, he's a famous guy. He has a lot of pressures in the world. He could be threatened. He could act erratically. He could be unpredictable. And do we really want a technology that could be super powerful in the future to end up in the hands of this man?" And that convinced Greg, and Greg then convinced Ilya: you know, I think there's a point here. Do we really want to give this much power to Musk? And that is why Musk then leaves, because the two switch their allegiances. They say, "Actually, we want Altman to be the CEO." And then Musk is like, "If I'm not CEO, I'm out."

Chapter 5: What People Get Wrong About Sam Altman

So, it sounds like Sam again managed to persuade someone to do something.

Mhm.

I guess this begs the question: what do you think of Sam Altman?

I think he's a very controversial figure.

You did an interesting pause. It's a pause where someone tries to select their words.

Well, this is what's so interesting about those interviews: people are extremely polarized on Altman. No one has in-between feelings about him.
Either they think he's the greatest tech leader of this generation, akin to the Steve Jobs of the modern era, or they think that he's really manipulative, an abuser, and a liar. And what I realized, because I interviewed so many people, is that it really comes down to what that person's vision of the future is and what their goals are. So if you align with Altman's vision of the future, you're going to think he's the greatest asset ever to have on your side, because this man is really persuasive. He's incredible at telling stories. He's incredible at mobilizing capital, at recruiting talent, at getting all the inputs that you need to then make that future happen. But if you don't agree with his vision of the future, then you begin to feel like you're being manipulated by him to support his vision even if you fundamentally don't agree with it. And this is the story especially of Dario Amodei, CEO of Anthropic, who was originally an executive at OpenAI.

So, for people that don't know, Dario now runs Anthropic, which is the maker of Claude. A lot of people probably are more familiar with Claude.

Yeah. And it's one of the biggest competitors to OpenAI.

And Amodei, at the time when he was an executive at OpenAI, thought that Altman was on the same page with him, and then over time began to feel that Altman was actually on exactly the opposite page, and felt that Altman had used Amodei's intelligence, capabilities, and skills to build things and bring about a vision of the future that he fundamentally didn't agree with.
And so that's why people end up with this bad taste in their mouths. And so, you know, I've been covering the tech industry for over eight years and have covered many companies. I've covered Meta, Google, and Microsoft in addition to OpenAI. And Altman is the only figure with whom I've seen this degree of polarization, where people cannot decide whether he's the greatest or the worst.

Chapter 6: The Power Struggle: Who Tried To Oust Sam Altman, And Why

You mentioned Dario there, and what I found really interesting is to look at how people's quotes evolve over time with their incentives. So I was looking at all of the things they've said on the record, on podcasts, in their blog posts, to see how it's evolved over time. And Dario, who was the former VP of research at OpenAI and has now moved on to Anthropic, who are taking a slightly different approach to developing AI, said back in 2017, while he was still at OpenAI, and this is a quote: "I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can't see any reason in principle why that couldn't happen. My chance that something goes really quite catastrophically wrong on the scale of human civilization might be somewhere between 10% and 25%."

And you also mentioned Ilya, who was a co-founder of OpenAI and then left. I guess the first question I'd ask is: why did he leave?

It's a great question.
So he was instrumental in trying to get Sam Altman fired, and he's another one of the people who over time began to feel like he was being manipulated by Altman towards contributing to something that he didn't believe in. And because I interviewed a lot of people, I know that Ilya in particular had two pillars that he cared about deeply. One is making sure we get to so-called AGI, and the other is making sure that we get to it safely. And he felt that Altman was actively undermining both things. He felt that Altman was creating a very chaotic environment within the company, where he was pitting teams against each other, where he was telling different things to different people.

Have you ever spoken to him?

I have. I interviewed him in 2019 for a profile that I did of OpenAI for MIT Technology Review. And back in 2019, he has a quote where he says, "The future is going to be good for AIs regardless. It would be nice if it was also good for humans as well. It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful. And I think a good analogy would be the way that humans treat animals. It's not that we hate animals. I think humans love animals, and I have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important to us. And I think by default, that's the kind of relationship that's going to be between us and AI, which are truly autonomous and operating on their own behalf."

And that was in 2019, the year that you interviewed him.
One of the things that I feel like we should take a step back to examine is this idea of what even is artificial intelligence, and what do we mean by intelligence? A huge part of the views of the different people, and the quotes that you're reading, derives from a specific belief that they each have on this question of what constitutes intelligence. Ilya, throughout his research career, has felt that ultimately our brains are giant statistical models. This is not something that we actually know; this is his own hypothesis, and also the hypothesis of his mentor, Geoffrey Hinton, who has also been on this podcast. This is why they have such a strong conviction in the idea of building AI systems that are statistical models, and why they believe this particular approach is going to lead to systems that are intelligent as we are intelligent. It's a hypothesis that they have. It's not one that has been proven by science, and some people vehemently disagree with them on this particular thing. But if you step into their shoes and take on that hypothesis, assume it's true that our brains are in fact statistical engines, and that these systems that they're building are also statistical engines that they're making bigger and bigger until they become the size of the human brain, then that's why they say that making this comparison, where the system will become equal to human intelligence and then maybe exceed human intelligence, is relevant in their framework.
And Ilya gave a talk at one point at this really prominent AI research conference that happens every year, called Neural Information Processing Systems. It's a mouthful, but he gave this keynote where he shows a chart of the size of brains versus the intelligence of a species. And it's roughly linear: the bigger the size of the brain, the more intelligent the species. And so he thinks he's building a digital brain, because he thinks brains are just statistical engines. So from that logic, it's like, OK, if we then build a statistical engine bigger than the human brain, then based on this chart it will be more intelligent, and then we will be subjected to the same treatment that we've subjected animals to. But it's really important to understand that these are scientific hypotheses of specific individuals within the AI research community, and there's a lot of debate about whether this is in fact the case. Some of the biggest critics say it's very reductive to think of our brains as simply statistical engines.

Why does it matter to know the mechanism? Is it not just important to know the outcome, which is that it's going to be able to make a video for me, or that agents are going to be able to do the work that I do? Does it really matter for us to know the mechanism behind it?

Yes and no. It matters because these companies are driving their future actions based on this hypothesis.
So they have decided: we think that this hypothesis is true, so we should just continue building larger and larger statistical models in the pursuit of artificial general intelligence. And that's then having global consequences. In order to continue doing that, they're hoovering up more and more data. They're building more and more data centers. They're exploiting more and more labor in order to continue on this path. Here's a question that I think is important to ask: why are we trying to build AI systems that are duplicative of humans? We're kind of having this conversation right now where we've just taken the premise of this industry as a good thing. They said that we should be building AGI, so we say that we should be building AGI. I would like to ask: why are we doing that? Why is it that we are building a technology that is ultimately designed to replace and automate people away? That is not the enterprise of technology. The purpose of technology throughout history has been to improve human flourishing, not to replace people. And so this is a critical part of my critique of these companies and these scientists who have just adopted this goal, have relentlessly pursued it, and have had enormous capital and enormous resources to pursue it. Is this the right goal? Why are we doing this?
Why can't we just build AI systems that do things like accelerate drug discovery and improve people's health care outcomes, which are systems that have nothing to do with the statistical engines that they're trying to build to duplicate the human brain?

Chapter 7: The Real Reason Tech Giants Are Racing To Build AI

So why are they doing it? I mean, you've interviewed all these people, I think it's what, 300 people in total, 80 or 90 of them from OpenAI, the maker of ChatGPT. Why do you think they're doing it?

I think it's because they're driven by an imperial agenda. And that is why I call these companies empires of AI.

What do you mean by an imperial agenda? What does that term mean?

Empire is the only metaphor that I've ever found to fully encapsulate all of the dimensions of what these companies do, the scale at which they operate, and what motivates them to do what they do. And there are many parallels that you see between what I call the empires of AI and the empires of old. First, they lay claim to resources that are not their own in the pursuit of training these models. That's the data of individuals, the intellectual property of artists, writers, and creators. They're land-grabbing in order to build these supercomputer facilities for training the next generation of models. Second, they exploit an extraordinary amount of labor. They contract hundreds of thousands of workers all around the world, including in the US, to ultimately make these technologies. We can talk about that more.
And they also design their tools to be labor-automating, so that when the technologies are deployed, it also affects labor rights, because it erodes labor rights away. And this is a political choice that they make. Third, they monopolize knowledge production. They project this idea that they're the only ones that really understand how the technology works, and so if the public doesn't like it, it's because they don't actually know enough about this technology. They do this to the public. They do this to policymakers. And they've also captured the majority of the scientists who are working on understanding the limitations and capabilities of AI.

You think they're gaslighting the public, in a way?

They are. Yeah. If most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis? No. And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world. So they set the agenda on AI research in soft ways, simply by funneling money to their priorities, so that only certain types of AI research are produced. But they also will censor researchers when they do not like what the researcher has found. And so I talk in my book about the case of Dr. Timnit Gebru, who was the ethical AI team co-lead at Google, where she was literally hired to critique the types of AI systems that Google was building.
She then co-wrote a critical research paper that showed how large language models specifically were leading to certain types of harmful outcomes. And in an attempt to stop this research from being published, Google ended up firing Gebru, and then fired her other co-lead, Margaret Mitchell. And so they control and quash the research that is inconvenient to the empire's agenda.

Do you have an example where this is happening to journalists as well, who are asking questions of their team members? I think I was watching a video of yours where there was a young man who was saying he had someone show up at his door, knock, and ask for information, emails, text messages, and this person was from one of the big AI companies.

This was OpenAI. It started subpoenaing some of its critics as part of what appears to be a campaign of intimidation, but also what appeared to be a campaign of fishing for more information, to map out the network of critics further. This was a man who runs a small watchdog nonprofit, and they had been doing a lot of work during that time to try and ask questions about OpenAI's attempt to convert from a nonprofit to a for-profit. Ultimately, OpenAI was successful in that conversion. But during the period when it was sort of existential for OpenAI to complete this conversion, there were a lot of civil society groups and watchdog groups, like the Midas Project, who were trying to prevent the process from happening in the dead of night.
They were trying to get more transparency. They were trying to have more public debate about this, because it's unprecedented. And it was then that there was a knock on his door and he was served papers.

What did the papers say?

The papers asked him to reproduce every single piece of communication that he had had that might have involved Musk. So this was this strange paranoia that OpenAI had, that Musk was somehow funding these people to block the conversion. None of them were actually funded by Musk. So in this particular case, he simply answered their request with: I don't have any documents, because this doesn't exist.

So going back to this point of empires: you were saying that one of the features of an empire is a land grab, the next one was labor exploitation, and the third one, controlling knowledge production.

And one of the other ones that's really important to understand about the AI empires in particular is that empires always have this narrative that they tell the public: we're the good empire, and we need to be an empire in the first place because there are also bad empires in the world. And if you allow us to take all the resources and use all of the labor, then we promise we will bring you progress and modernity for everyone. We will bring you to this utopic state akin to an AI heaven. But if the evil empire does it first, we will descend into a hell.

And the evil empire being, in this case?

In this case, most often it's China. But actually, in the early days, OpenAI evoked Google as the evil empire.
So all of their decisions were about: we need to do it first, because otherwise Google will, this evil corporation driven by profit versus us, a benevolent nonprofit. This is a critical contest of who wins.

Chapter 8: Do AI CEOs Actually Believe This Will Help Humanity?

Do you think the people building these AI companies believe that the outcome is going to be all good? Do you think they believe it's going to serve everyone, that it's going to be the age of abundance, that everything's going to go well? What do you think they believe? What do you think Sam believes?

So, what's so funny is that such a core part of the mythology they create around the AI industry includes the belief that it could go very badly. It goes hand in hand. They need that part of the myth in order to then say: and that's why we need to be in control of the technology, because that's the only way that it's going to go really, really well. And Altman has said publicly, you know, worst case, lights out for everyone, but best case, we cure cancer, we solve climate change, and there's abundance. And Dario Amodei, same kind of rhetoric: worst case, catastrophic or existential harm for humanity; best case, mass human flourishing. So these are two sides of the same coin. They have to use both of these narratives in order to continue justifying an extremely anti-democratic approach to AI development, in which there should not be broad participation in developing this technology. They must be the ones controlling it at every step of the way.
Sam Altman did a tweet saying, "There are some books coming out about OpenAI and me. We only participated in two of them: one by Keach Hagey focused on me, and one by Ashlee Vance on OpenAI."

Chapter 9: Why OpenAI Refused To Be Part Of This Book

He went on to say that no book will get everything right, especially when some people are so intent on twisting things, but that these two authors are trying to. You quote-retweeted that tweet from Sam Altman and you said, the unnamed book, Empire of AI, is mine. Do you believe that tweet from Sam Altman was in reference to your book?

100%. Because there's only three books coming out about him.

And he had caught wind that your book was coming out?

He knew my book was coming out, because I had contacted OpenAI from the very beginning of my process and said, I'm working on a book now, will you participate in it? And actually, initially they said yes. So, my history with OpenAI: I profiled the company for MIT Technology Review. I embedded within the office for three days in 2019. My profile comes out in 2020, and the leadership are very unhappy. In my book, I actually quote an email that Sam Altman sent to the company about my profile, saying, "Yeah, this is not great." And from then on, the company's stance toward me was: we are not going to participate in anything that you do; we are not going to respond to any of the questions that you send. And this was, you know, something that they explicitly articulated. It wasn't me inferring.
So I had a colleague at MIT Technology Review who also covered AI. At one point, OpenAI sent him a press release saying, "We would love for you to cover this story." And he said, "I'm really busy. Will you send it to Karen?" And they said, "Oh, no. We have a history. You understand?" So for three years they refused to talk to me, but then I ended up at The Wall Street Journal, where they felt a bit compelled, because it was the Journal, to reopen the lines of communication.

And so I started having more dialogue with them. Every time I wrote a piece, I would always send them my requests for comment. I would always ask, will you sit for interviews? And we did get to a more productive relationship. Then I embarked on the book; I left the Journal to focus on it full-time. I told them right away: I'm working on this book, and I want to continue this productive conversation where I make sure I reflect OpenAI's perspective in the book. And they said, we can arrange interviews for you, you can come back to the office, we'll set up some conversations.

And then, as we were going back and forth on this, the board fired Sam Altman. That's when things started going south, because the company became very sensitive to scrutiny. They started kicking the can down the road, down the road, down the road. I kept saying, "Hey, when are we rescheduling this? What's going on?" And then I got an email saying, "We are not going to participate at all.
You are not coming to the office. You're not doing interviews." And I had actually already booked my tickets; I was already going to fly to San Francisco for the interviews. So then I told them, "That's fine. I will still engage in the process where I give you extensive requests for comment. Through my reporting, I'll keep you updated on all the things that I'm finding, so that you can still choose to comment." I gave them 40 pages of requests for comment, and I gave them over a month to respond to all of it. So when the tweet came out, we were in the middle of all this back and forth; that's when Altman tweeted it. And they never responded to a single one of the 40 pages.

Sam Altman does a lot of interviews.

Yeah.

You know, he's doing interviews all the time. He's done every podcast. I've seen him on everything from Tucker Carlson to, I think, Theo, Joe Rogan, podcasts all over the world. I wonder why he won't do mine.

Well, maybe...

I don't know why. I think I'm fair with everyone. I just ask questions I genuinely care about. I don't come in with huge preconceptions, at least when I meet people for the first time. But I've heard through the grapevine that he doesn't want to do mine.

I mean, going back to what you were saying earlier about the way that OpenAI and these companies control research, you asked: do they also do this with journalists? The answer is yes.
And apparently they also do it with anyone who has a broad mass-communications platform. It's not just about the conversation you're going to have with them; it's about who else you choose to platform. And there's this huge problem in technology journalism where companies know that a really big carrot they can give to technology journalists is access.

Yeah.

And they will withhold that access at the drop of a hat if they catch wind that you're speaking to someone they didn't want you to speak to.

This is so true. And I don't think the average person truly understands this. This kind of sounds like conspiracy theory as you say it, and I'm not going to name names here because I don't think it's important, but there is a particular person in AI whose team has basically dangled the carrot of them coming here for about 18 months. And I'm like, you don't have to dangle the carrot. I'm going to speak to whoever I want to regardless, carrot or not. And when this person comes, if they want to come, I'll give them a fair shot. I'll ask them genuinely curious questions about what they're doing, their incentives. I won't gotcha them; I don't have a history of ever gotcha-ing anybody. Even if I have a difference of opinion, I'll still ask the question. But they dangle carrots and they say, "Well, you know, he's thinking about it, let's think about a date."
And the strategy is, and they think those people don't understand it: if we just dangle it for long enough, then they will perform in the way that we want them to. They'll be pleasant about us. They won't be critical. They won't platform our critics. And I think a lot of their game is just dangling the carrot forever.

Yes. Yeah.

That's the optimal outcome: if we just dangle it, if we just tell them, look, we're just looking at the schedule. It just doesn't work. I think in the modern world, you just have to go there, give your opinion, allow the clash of ideas in the public forum, and let the viewers decide for themselves what they think.

Yeah.

But this is such a huge part of their machinery: the way they use these tactics to massage the public image of these companies and make sure that information they don't want out, and even opinions they don't want out there, don't get out there.

Mhm.

And so, you know, I feel very lucky now that OpenAI shut the door on me early. At the time, I didn't feel lucky. I felt like I had screwed myself over. But what is access to a journalist, right? You're supposed to report the truth, and you're always supposed to report in the interest of the public. That is the point of journalism. And in that moment, I was relatively junior in my career, and I was like, did I misunderstand what journalism is about? Should I have actually been playing the access game?

Mhm.
But it was too late. The door was shut to me, and so I had to build my career understanding that the front door was never going to be open.

Yeah.

And that actually really strengthened my own ability to just tell it like it is, to be objective, and to just report what I see as the facts presented to me, irrespective of whether the company likes it or not. And most often the company really does not like it, but I can continue to do the work. They don't need to open the front door for me. I was still able to do more than 300 interviews.

Chapter 10: Why Sam Altman Was Forced Out

So Sam Altman gets kicked off the OpenAI executive team. Did you find out why that happened?

Yeah, there's a scene-by-scene recounting from... I can't remember the exact number of sources, so I don't want to misquote myself, but it was around six or seven people who were directly involved or had spoken to people directly involved in the decision-making process.

So, Ilya Sutskever is seeing serious problems with the way that Altman's behavior is leading to bad research outcomes and poor decision-making at the company. He then approaches a board member, Helen Toner.

Ilya, for anyone that doesn't know, is the co-founder of OpenAI we mentioned earlier.

Yes. And he does a bit of a sounding-board thing with Helen, because Ilya is freaking out.
He's been sitting on these concerns for a while, and he's like, if I tell this to someone, this could also be really bad for me if Altman finds out. And so he asks for a meeting with Toner, and in that first meeting he barely says a thing. He's just dancing around, trying to figure out: hey, is this someone I can maybe trust to divulge more information to?

And Toner's role and responsibilities at OpenAI were... she was a board member?

Just a board member. And specifically an independent board member. So when OpenAI was a nonprofit, the board was split between people who had a financial stake in the company and people who were fully independent. This was meant to be a structure that would balance the decision-making toward the public interest rather than toward the for-profit entity that OpenAI then created. And Ilya, as a non-independent board member, was approaching Toner, an independent board member, to try to see whether she was seeing or hearing the same things he was about the effect Altman was having on the company. This then sets off a series of conversations, first between Ilya and Helen, and then between Mira Murati and some of the board members.
Mira Murati was at that point the chief technology officer of OpenAI. These two senior leaders, through these conversations and through documentation they were pulling together, like emails, Slack messages and so forth, conveyed to the three independent board members: we are very concerned about Altman's leadership; he is creating too much instability at the company, and he is the root of the problem.

Chapter 11: The Hidden Instability: What Was Altman Actually Disrupting Internally?

They were trying to say to these independent board members that the problem would not be fixed unless Altman was removed, because of the way he was pitting teams against each other and creating an environment where people were unable to trust each other anymore, and were competing rather than collaborating on what's supposed to be this really, really important technology.

When you say instability, that's quite a vague term. It could mean lots of things. Instability could mean pushing people to work harder, right? What do you mean by instability, in terms as specific as you can possibly make them?

When ChatGPT came out into the world, OpenAI was wholly unprepared. They didn't think they were launching a gangbusters product.

Yeah.
They thought they were releasing a research preview that would help them get the data flywheel going: collect a bunch of data from users that would then inform what they thought would be the gangbusters product, which was a chatbot using GPT-4. ChatGPT was using GPT-3.5. And because of that, there were servers crashing all the time; they had to scale their infrastructure faster than any company in history. There were all of these outages. They were also trying to hire faster than any company in history, to get more personnel in. And they were sometimes hiring people and then saying, "Actually, we made a mistake. We shouldn't have hired you." So they were firing people left and right, and people were just disappearing off of Slack, and that's how their colleagues would learn that they were no longer at the company.

So it was, yes, like many fast-growing companies, a very chaotic environment, and a particularly chaotic one because it was extra fast; they had to accelerate more than any other startup. And on top of that, Mira Murati and Ilya Sutskever felt that Altman was making it worse: he was not effectively ameliorating the chaos, he was actually sowing more of it, getting these teams to be more divided. And this is where it's important to understand that the executives and the independent board members were all operating under this idea that they were building AGI, and that AGI could be either devastating or utopic for humanity.
And so yes, it's like any other company, and no, it's not like any other company. In their view, you cannot have this degree of chaos as the pressure cooker for creating a technology that, in their conception, could make or break the world.

That is basically what the independent board members also begin to reflect on. They have these conversations among themselves where they ask, "Well, based on what we're hearing about Altman's behavior, if this were an Instacart, would that warrant firing him?" And they concluded, maybe not. But this is not Instacart. And that's why they were like, well, crap, maybe this actually does rise to the bar where we should consider replacing him, because we are ultimately building a technology that we think could have transformative impacts in either the positive or the negative direction.

And so that is what happens. It's these two executives, and then the independent board members, who were also hearing other feedback from their connections within the company and with other people in the industry. At one point, Adam D'Angelo, who is one of the independent board members and the CEO of Quora, a tech startup in the valley, is at a party in San Francisco, and he starts to hear rumors that there's something weird about the way OpenAI has structured the OpenAI Startup Fund, which was a fund the company had created to start investing in other startups.

Mhm.
And he realizes they'd never really seen documentation from Altman about how the Startup Fund had been set up. When they finally get the documents, it turns out that the OpenAI Startup Fund is not OpenAI's startup fund. It's Altman's startup fund. And this was one of several experiences the independent board members were having where they felt there was something not right about the fact that there were continual inconsistencies between the way Altman portrayed what was being done and what was actually being done. So when these two executives approached the independent board members, they were like, okay, this lines up with the experiences we've been having.

At that point, they have a series of very intense discussions, meeting almost every day, talking about: should we actually, really consider removing Altman? And in the end they conclude: yes, we should. And if we're going to do it, we need to do it quickly. Because they were very concerned that the moment Altman found out, his persuasive abilities would make it impossible to do. And so they end up firing Altman without telling anyone. They don't talk to any stakeholders to get them on the same page. Microsoft gets a call right before they execute the action, saying, "We're going to fire Altman."

And Microsoft, for anyone that doesn't know, was the lead investor in OpenAI at the time.

Yes. One of the only investors in OpenAI at the time.
And that is what then unravels the whole thing, because every single person affected by this decision is now extremely angry that they were not involved. And that is what creates the campaign to bring Altman back. Altman is reinstalled as CEO days later.

Chapter 12: Ad Break

This company that I've just invested in has grown like crazy, and I want to be the one to tell you about it, because I think it's going to create such a huge productivity advantage for you. Wispr Flow is an app you can get on your computer and on your phone, on all your devices, and it allows you to speak to your technology. So instead of writing out an email, I click one button on my phone and I can just speak the email into existence, and it uses AI to clean up what I was saying. When I'm done, I just hit one button and the whole email is written for me. It's saving me so much time in a day, because Wispr Flow learns how I write. On WhatsApp, it knows I'm a little more casual; on email, a little more professional. And there's this really interesting thing they've just done: I can create little phrases that automatically do the work for me. I can just say "Jack's LinkedIn" and it copies Jack's LinkedIn profile for me, because it knows who Jack is in my life. This is saving me a huge amount of time. This company is growing like absolute crazy, and this is why I invested in the business and why they're now a sponsor of this show. Wispr Flow is frankly becoming the worst-kept secret in business, productivity, and entrepreneurship. Check it out now at wisprflow.ai/steven.
It will be a game changer for you.

There's a phase a lot of companies hit where they're no longer doing the most important thing, which is selling; they get really bogged down with admin. And it's often something that creeps up slowly, and you don't really notice until it's happened. Slowly, momentum starts to leak out. This happened to us, and our sponsor Pipedrive was a fix I came across ten years ago. Ever since, my teams across my different companies have continued to use it. Pipedrive is a simple but powerful sales CRM that gives you visibility on any deals in your pipeline. It also automates a lot of the tedious, repetitive, and time-consuming parts of the sales process, which in turn saves you many hours every single month, which means you can get back to selling. Making that early decision to switch to Pipedrive was a real game changer, and it's kept the right things front of mind. My favorite feature is Pipedrive's ability to sync your CRM with multiple email inboxes, so your entire team can work together from one platform. And we aren't the only ones benefiting: over 100,000 companies use Pipedrive to grow their business. So if something I've said resonates, head over to pipedrive.com/CEO, where you can get a 30-day free trial. No credit card or payment required.

How does the CEO of a major company get fired by the board? Because, board members... there's a quote in your book, on page 357, where you quote Ilya saying, "I don't think Sam is the guy who should have the finger on the button for AGI." Now, I asked myself this question. You know, I work with lots of people here.
We have 150 people who work in this business, and those people know me best.

Yeah.

They see me on camera. They see me off camera. So if they said, "We don't think Steven is the right person to host the Diary..."

Yeah.

It would take a lot for them to say that.

Yeah.

They must have seen something off camera to make them go, "We don't think he's the right person to be on camera," or whatever the reason. And in the case of AI, which is much more consequential than a podcast filmed in my old kitchen, it almost sends a chill down one's body to think that the co-founder of a business has gone to the board and said, this isn't the guy to lead something this consequential. And Mira Murati then also said, "I don't think Altman is the right guy," and then they both left later.

So then Altman comes back, and lo and behold, Ilya never comes back. His fear that Altman finding out would be bad for him manifested: he ended up not coming back, and Mira Murati left shortly thereafter.

Chapter 13: What Really Happened When Sam Altman Was Fired, and Why Employees Revolted

Quite a lot of these people leave OpenAI, don't they?

They do. So, consider one of the origin stories of OpenAI: this dinner that happened at the Rosewood Hotel, a very swanky hotel right in the heart of Silicon Valley that was one of Elon Musk's favorites whenever he came up from LA to the Bay Area. There was this dinner where Altman was intending to recruit the OG team that would start OpenAI.
So he's telling everyone, you might have a chance to meet Musk, because Musk is going to come to this dinner. He cold-emails Ilya and gets Ilya to come, and Ilya specifically wants to come because he wants to meet Musk. He also emails all these other people, including Greg Brockman and Dario Amodei. These are all people who ended up working at OpenAI, and almost all of them, not every one of them, but almost all of them, end up working at OpenAI and then leaving, almost all of them specifically after they clash with Altman. Ilya left and launched a company called Safe Superintelligence.

Yeah.

Which is an indirect dig if I've ever heard one. Do you know what I mean? If someone co-founded this podcast with me and then left and started a podcast called Safe Podcasting, I'd take that as a slight. I'd have people knocking on their door and asking for their texts.

One of the things that is happening here is... it is not a coincidence that every single tech billionaire has their own AI company.

Mhm.

They want to create AI in their own image, and that's why they keep not getting along. In fact, it's not just that they don't get along; they end up hating each other after working together.

Mhm.

And then they splinter off into their own organizations. After Musk leaves, he starts xAI. After Dario leaves, he starts Anthropic. After Ilya leaves, he starts Safe Superintelligence. After Mira leaves, she starts Thinking Machines Lab.
They want to have control over their own vision of this technology. And the best way they've derived, from their experiences of trying to put their vision into the arena, is by creating a competitor and then competing with OpenAI and with all the other companies out there.

Do you think some of these AI CEOs realize that they are, quite literally, summoning the demon, as Elon said ten years ago, but they don't really care, because being the person who summoned the demon makes you consequential and powerful and historical, even if the outcome is potentially horrific? Even if there's, like, a 20% chance of it being horrific? I remember, I think it was Dario; he's the one who said there's somewhere between a 10% and 25% chance of things going catastrophically wrong on the scale of human civilization. 25% is a one-in-four chance. If you put bullets in a four-chamber revolver and said, "Steven, the upside is you could become a multi-gazillionaire and be remembered forever; the downside is that there would be a bullet in your head," there is no chance I would take that bet, with a 25% potential chance of things going catastrophically wrong.

So, I have a very long answer to this, because: do they know they're summoning the demon? It really depends on what we define as summoning the demon. In this particular case, to go back to what we were saying before, there's a mythology the AI industry uses where summoning the demon is an integral part of convincing everyone that they can therefore be the only ones developing this technology.
I got it. So on one end, you've got to say, if we don't, China will, and that's terrible.

Yeah.

But if we let anyone else do it other than me, then that's terrible as well.

Exactly.

So that means I have to do it, and you have to give me money and support.

Exactly. So when they're saying these things, we should understand it not as a genuine prediction based on what they're seeing, because, first of all, we don't predict the future; we make it. We should understand it as an act of speech to persuade other people into believing that they should cede more power, more resources, to these individuals. And so, do they know that they're summoning the demon? I mean, they are purposely trying to create this feeling within the public that they are, because it is a crucial part of their power.

But if we were to define it as: do they realize that the things they are doing are already having really harmful impacts all around the world, on vulnerable people, vulnerable communities, vulnerable countries? That's where I'm like, maybe yes, maybe no, and they don't really care. Because, in their frame of mind... I sometimes use the analogy that the AI world is like Dune.

Dune, for anyone that doesn't know, is the science-fiction epic written by Frank Herbert. It's set in this intergalactic era where there are all these houses fighting each other for spice. So it's a callback to colonialism and empire, and they are all trying to control the spice.
But one of the features of this story is that there are these myths seeded on the different planets, a religious myth, basically, about the coming of a messiah, that are used as ways to control the people. And Paul Atreides, when he arrives at the planet Arrakis with the intention of fighting against the empire and avenging his father's death, steps into a myth that has been seeded on this planet, which says that one day there will be a messiah who comes and saves it. So he steps into the role of the messiah and leans into this idea in order to better control the people and rally them behind him as a leader to help with his quest. He knows it's a myth in the beginning, but because he lives and breathes and embodies it, it starts to blur in his mind whether this is really a myth or whether he's really the messiah.

And this is what I think happens in the AI world. On one hand, there are all these executives who actively engage in myth-making, because, you know, I have all these internal documents that I write about in the book where they are very keenly aware of how to bring the public along with them: by showing them dazzling demonstrations of the technology, by crafting a mission that will sound really good and make people give more leniency to their companies.
So they know they're doing the mythmaking, and also, I think, many of them lose themselves in the myth, because they have to live and breathe and embody it day in and day out. And so when Dario says he thinks that 10 to 25% of the future could be catastrophic, or whatever the probability is, he is actively engaging in the mythmaking, but he's also losing himself in the myth. I think if you were to ask him, "Do you genuinely believe that?" he would say, "Yes, I genuinely believe that," because there's been a blurring between when he's saying something just to say something and when he actually believes what he's required to believe in order to continue doing the things that he's doing.

And this is the whole psychology of cognitive dissonance, right? The brain struggles to hold two conflicting worldviews at the same time, so it's incentivized to dismiss one. If you wanted to be a healthy person but you were also a smoker, and I pointed out that smoking is bad for you, the first words out of your mouth are going to be, "Yes, but smoking helps me with stress," or, "Yeah, but I only do it when..." I kind of see that at the moment, because these companies have to raise extortionate, huge amounts of money to fund their AI research, and they're building out all of these data centers. So when they're out in public, they're always fundraising. All of these major companies are fundraising all the time at the moment.
So you can't be fundraising and saying, "I'm going to destroy your children's future, potentially. There's a 25% chance that your children aren't going to have a great life."

Which might be the truth. I mean, that is actually what they say. This is famously what Dario Amodei does. He does that, but the others... Sam's not doing that as much anymore.

Yes. And it's because, you know, it goes back to how each of them distinguishes themselves a little bit, the brand that they need to project.

Do you think any of them have a stronger moral compass than the others? Because I think Dario often gets the credit for having more of a backbone and being more conscious of implications. He does get a lot of credit for that. He's from Claude and Anthropic, for anyone that doesn't know.

I don't think the answer to that question truly matters, because to me, even if you were to swap all the CEOs for people who everyone would say are better at running these companies, it doesn't fix the problem that I identify in the book, which is that there is a system of power that has been constructed where these companies, and the people running them, get to make decisions that affect billions of people's lives around the world, and those billions of people do not get any say in how it goes.

Chapter 14: Should You Trust Politicians To Regulate AI, Or Is That Riskier?

Those people can go to the polls, right?
So, if the public is sufficiently educated, they can go to the polls and pick a leader who says they're going to pass laws, or try to pass laws.

Yes. But at the speed and pace at which these companies operate, and at their sheer scale and size, they're also able to spend extraordinary amounts of money, hundreds of millions in these upcoming midterms, to try to kill every possible piece of legislation that gets in their way, and to craft legislation that would codify their advantage. And so, to me, I think sometimes as a society we obsess a little bit over whether these leaders are good or bad people. To me, the bigger question is: is the governance structure that we've created a sound one that allows broad participation, or an anti-democratic one that has consolidated this decision-making power in the hands of the few? Because no person is perfect. I don't care who is at the top of these companies; they're not going to have the ability to make decisions on behalf of so many people around the world, people who live and talk and have a culture and history that are fundamentally different from theirs, without things going wrong. And that is why, throughout history, we've moved from empires to democracy. It's because empire as a structure is inherently unsound. It does not actually maximize the chances of most people in the world being able to live dignified lives.

I'm going to try to take on their point of view.
So, this is me playing devil's advocate. Okay. But Karen, if the US doesn't continue to accelerate its AI research, at some point China's model is going to become so smart and intelligent that we're basically going to have to rent it off them, and, you know, they'll get the scientific discoveries, they'll discover the new era of autonomous weapons, and we will be their backyard. And logically, that argument does appear to be pretty true.

No, it's not.

If we scale up, if we just imagine any rate of change with this intelligence, at some point we're going to come to a weapon that could theoretically disable all of the United States' electricity, their weapons systems. It would know exactly how to disable the United States from a cyber perspective, because it would be that smart. All you've got to imagine is any rate of improvement over any sort of long period of time. So this is a theory that might be true, and if it's true...

I mean, yeah, any theory might be true.

But, you know, again, going to this point of, even if it's a small percentage, it's worth paying attention to. On the other side of it, this is a theory that people talk about: it could be the case that the most intelligent civilization is going to be the superior civilization. Logically, that's a pretty sound thing to say.

No.
So, there are a lot of fundamentals in this argument that would need to be true in order for this to be a viable argument. And let's knock them down one by one. The first one is that these systems are intelligent, and that just scaling them is going to bring us more intelligence.

So far, so true.

No, it's actually not, because, first of all, we don't actually know if these systems are... "intelligence" is almost not the right analogy. It's like a calculator: a calculator can do math problems faster than a human. Does that make it intelligent? It has a narrow intelligence, because it's solving a narrow problem, like 1 plus 1 equals 2. And these systems are actually also quite narrowly intelligent, in the sense that even though these companies say that they're everything machines that can do anything for anyone, they actually can only do some things for some people. This is the jagged frontier of these AI models: some of the capabilities are quite good, other capabilities are not that good. You know why that happens? It's because a company can only focus on advancing certain types of capabilities. It can't literally focus on advancing all types of capabilities. They have to actually set their mind to advancing a certain capability: by gathering the data that is needed for that capability, by getting a bunch of human contractors to annotate and train the model to do that exact thing.
And so scaling these models is actually a perpendicular question to whether we are actually getting more cyber capabilities specifically, and more military capabilities specifically.

I would argue that most of the top people in AI believe that the intelligence is going to continue to scale for some time. A lot of them do; Geoffrey Hinton does.

And again, it goes back to his hypothesis about how human intelligence works and what the appropriate model of the brain is. His hypothesis throughout his career has been that the brain is a statistical engine. But that's his hypothesis, and it is not universally agreed upon, especially among people who are not in the AI world. When you talk with neuroscientists and psychologists, people who actually study human intelligence and the human brain, that is where you start to get a lot of debate and disagreement about this particular view that Hinton has. And so this is one of the things: AI is already being used in the military, and has been used in the military for a long time. But specifically accelerating large language models isn't the only path to military capabilities.
The companies would have to choose to specifically pick military capabilities to accelerate, not just general intelligence. You know what I'm saying? They create this myth that they are actually pushing the frontier of all of the capabilities of the model, but that's not what's actually happening internally. I had hundreds of pages of documents on how they were specifically training models. They pick what capabilities they want to advance, and you know how they pick them? It's based on which industries and countries would be able to pay them the most money for their services. So they pick finance, law, medicine, healthcare, commerce. It's not actually intelligent like a baby, where the more the baby grows up, the more they start having these general abilities.

I think I have jagged intelligence, I'll be honest. I wasn't going to say it, but I think I know a little bit about... no, I know a lot about a little bit.

Yeah, but you also have the capability to learn and acquire knowledge by yourself. And you also have the ability to choose what you're going to learn and acquire by yourself. It's not easy, and it takes a lot more time than these models, but it seems to take less compute. And you can learn how to drive in one place and then immediately know how to drive in another place. These models cannot do that. Every time a self-driving car is shifted to another location, it has to completely retrain on that location.
Like all the self-driving cars. I mean, we're sitting in Austin right now, and there are all these self-driving cars driving through Austin. But when one of them learns, they all learn.

Which is... well, it's just because it's an operating system that has an AI model as part of it, and you're training the AI model and then you deploy that AI model across all the self-driving cars.

Chapter 15: How Robots Updating Themselves Could Change Everything Overnight

A big advantage, because if one Optimus robot learns one thing in one factory, they all learn it. And imagine if we humans all learned what all the other humans learned. That would give us such an unbelievable competitive advantage.

I mean, one of the ways we did that is through communication. And they could all be learning the wrong thing, which has also happened again and again with these technologies: all of them learn the wrong thing, and they all have the same failure mode. Part of the resilience of human society is that we have different expertises, and we also have different failure modes.

I think sometimes we hold AI models to a higher standard than we hold humans to. Because I'd hear, on stage, we're in Austin at the moment, and I'd hear people go, "Ah, but you know, those AI models, they hallucinate sometimes." I'm like, "Have you met a human?" Like, I hallucinate all the time. I can barely spell or do math.
1:13:391 hora, 13 minutos y 39 segundosSo, 1:13:401 hora, 13 minutos y 40 segundosyes, but it's it's once again like using this analogy that was specifically picked in the early days of the field as a way to market these technologies. like 1:13:481 hora, 13 minutos y 48 segundoswe're repeatedly using the intelligence analogy and relating these machines to human intelligence as a a way to try and 1:13:561 hora, 13 minutos y 56 segundosgauge whether or not it is good or worthy or capable in society. I think the output is the thing that really m is the most consequential which is like 1:14:041 hora, 14 minutos y 4 segundosokay it might have a different brain and a different system but does it arrive at the same capability like does it is it able to do surgery on someone's brain is 1:14:121 hora, 14 minutos y 12 segundosit able to drive a car like my car drives itself in in Los Angeles I don't touch the steering wheel and I can drive for many many hours and in here in Austin I just saw the ones the other day 1:14:201 hora, 14 minutos y 20 segundoswhere they've removed the steering wheel and the pedals the new cyber cabs so I go it doesn't really matter if it's using a different system if it's navigating through the world as a car it 1:14:281 hora, 14 minutos y 28 segundoshas a better safety record than human beings Um then as far as I'm concerned, 1:14:341 hora, 14 minutos y 34 segundosintelligence or not, it's like yes, you know, 1:14:361 hora, 14 minutos y 36 segundosbut that was not the original argument that you made, which was like these systems are just generally going to become more intelligent across different things based on the prediction. This is a prediction that you're making, right? 1:14:471 hora, 14 minutos y 47 segundosLike that and this is a prediction that all the AI um Ilia's making, Dario's making, Elon's making, Zuckerberg's making, man's making, Dennis is making. 
And do you know what the common feature of all of them is? They profit enormously off of this myth.

Elon has recently spearheaded the construction of Colossus, a massive supercomputer in Memphis housing 100,000 GPUs, specifically to scale up their AI models faster than their competitors. It appears that they've all converged around this idea that you can brute-force your way to greater, more generalized intelligence.

They've converged around the idea that you can brute-force your way into models that they can sell to people for automating certain tasks that are financially lucrative.

Chapter 16: Will AI Surpass The Best Surgeons, And What Happens If It Does?

And I heard Elon say that if you're a surgeon, there's just no point. He was like, don't train to be a surgeon. He says in a couple of years' time, Optimus and AI generally are going to be better than any surgeon that's ever lived. Do you think these things are true?

Well, you know, I'm pretty sure it was Hinton who famously, slash infamously, said there would be no need for radiologists anymore. He set a deadline that we've already passed; I don't remember how many years. Radiology is doing great as a profession.

Do you think it will be in 5 years?

Okay. So, this once again goes back to this question of why we build technology, and why we should specifically be building AI.
And for me, the whole project of technology development and advancement is not to advance technology for technology's sake. It's to help people. And there has been lots of research showing that actually the best outcome for people in a healthcare setting is for the radiologist to have the AI model in their hands, for the human expert to use the AI model as a tool, as an input into their judgment. And it is that combination that leads to the most accurate and early diagnoses of certain types of cancer, which then helps improve the prognosis of the patient.

Do you believe that in the coming years pretty much all the cars on the road will be driving themselves?

No.

You don't think so?

Mm-mm.

How come?

Because of the way the technology works. Because these are statistical... I mean, currently, the way that AI models are primarily developed, they're statistical engines. You have what's called a neural network, which is a piece of software that has a bunch of densely connected nodes and parameters.

Is this what they call parameters?

Yeah, pretty much. And you're just pumping a bunch of data into it, and it's analyzing the data, finding all these correlations in the data, finding all these patterns, and then it's through those patterns that the machine is able to act autonomously, right?
And so the way that they're training a self-driving car is they're recording all this footage, and then they have tens of thousands or hundreds of thousands of human contractors who literally draw around every single vehicle in the footage, every single pedestrian, every single traffic light, every single lane marking, and label it exactly as such. So that it's then fed into an AI model that can identify all of these different components, and then it's connected to another piece of software, which is not AI, that's saying: okay, if the AI model recognizes a pedestrian, we do not run over the pedestrian. If the AI model recognizes a red traffic light, we stop.

Chapter 17: Are Self-Driving Cars Truly Safe?

And so the thing about statistical engines is that they're based on probabilities. They're not based on deterministic logic. So the systems make errors all the time, and it is technically impossible to get them to stop making errors.

Humans make errors way more than these systems in this case. Like, the safety record, isn't it something like 10 times safer to be driven in a Tesla with autonomous driving than for a human to drive?

It depends on the place. It depends on whether the Tesla was trained to specifically navigate the place that you're driving. Because if it's in Mumbai, or in some place in Vietnam, no, it would not be safer.
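The split she describes, a probabilistic perception model whose labeled detections feed a separate, deterministic rule layer, can be sketched roughly as follows. This is a minimal illustration of the idea only, not any real self-driving stack; all class and function names here are hypothetical, and the stand-in "model" just returns fixed detections.

```python
# Sketch of the two-layer design described above (all names hypothetical):
# a statistical perception model proposes labeled detections with
# confidence scores, and a deterministic, non-AI rule layer turns them
# into driving decisions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "red_light", "lane_marking"
    confidence: float  # perception is probabilistic: never a certainty

def perception_model(frame) -> list[Detection]:
    # Stand-in for a trained neural network. A real one is learned from
    # millions of frames in which contractors drew around and labeled
    # every vehicle, pedestrian, traffic light, and lane marking.
    return [Detection("pedestrian", 0.97), Detection("red_light", 0.88)]

def rule_layer(detections: list[Detection], threshold: float = 0.5) -> str:
    # Deterministic logic over the model's probabilistic outputs:
    # fixed if/then rules, no learning involved.
    labels = {d.label for d in detections if d.confidence >= threshold}
    if "pedestrian" in labels or "red_light" in labels:
        return "stop"
    return "proceed"

print(rule_layer(perception_model(frame=None)))  # -> stop
```

The point of the sketch is her distinction: the rules themselves are deterministic, but their inputs are probabilities, so a misclassification below (or above) the threshold propagates straight into the driving decision.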
I would much rather be driven by someone who has been driving in that place their whole life. I'm not arguing against the fact that in certain places, where the car has been explicitly trained to drive, it has a better safety record than the humans driving in that place. But you specifically asked whether I think that most cars...

Most cars in the world? In the United States, because we're here.

I don't actually think that it's imminently on the horizon.

Ten years?

No, I don't think so.

I sat with Dara from Uber, and he's pretty convinced that his 9 million couriers will be replaced by autonomous vehicles.

I mean, how long have self-driving cars been invested in thus far? It's been more than 10 years. And what percentage of cars right now are autonomous on US roads? Part of it is actually not a technical problem, right? Part of it is also a social problem: do people even trust getting into these vehicles? Part of it is also a legal problem: if the self-driving car kills someone, which has happened...

Yeah, it has happened.

...who is responsible? So, in the case in LA, it was both Tesla and the driver, because the driver dropped their phone, they looked down, this was a couple of years ago, I believe, and they went to grab their phone and they hit someone. And so it went to court, and they were both held responsible, the driver and Tesla.
In terms of Tesla, pretty much everyone who gets the car, it comes with autonomy now, for most people, I believe.

Partial autonomy.

Yeah, it's called Full Self-Driving at the moment, where...

I mean, yes, it is called Full Self-Driving. Full Self-Driving (Supervised), where you have to be looking in the right direction.

Yeah. So, it's partial autonomy.

And here in Austin, it's full autonomy, because there's no steering wheel.

Yeah, on the new car.

So you can't drive it anyway. But it is, you know, the Model Y is the undisputed bestselling car in the world, across all brands. Well, I guess my point here is, these predictions where they say AI is going to completely change transportation and driving, it's going to completely change... lawyers aren't going to have jobs, accountants aren't going to have jobs. Do you believe that they are true? Do you believe that there's going to be mass job displacement?

Okay, so I do think that there are going to be huge impacts on employment, and we're already seeing those impacts. It is not simply because the AI models are just automating those jobs away. It is specifically because the models are improving in certain capabilities based on what the companies developing them choose to improve them on.
And executives at other companies are then deciding to fire or lay off their workers because they think that AI can replace the worker, irrespective of whether that might be true. And there have been cases, like the Klarna CEO, who laid off a bunch of people thinking that he would replace everyone with AI, and then it didn't actually work, and he had to ask some people to come back.

I actually DM'd him about this. If you're hearing this, it's because I've DM'd Sebastian and he's fine with me sharing this. Because I've heard his name mentioned a lot, and when we talked about AI in the past, people mentioned Sebastian and Klarna as the example, I wanted to clarify with him what the truth was. He said: "It's great to hear from you. I think sometimes people struggle with the idea that two things can be true at the same time. I think it might be time to come back on your podcast. To your point, this is the media misinterpreting my tweet. We are doubling down on AI more than ever. Klarna is shrinking by almost 100 employees per month due to AI. We used to be 7,400 at the peak. A year ago, 5,500. Now we're 3,300. And by the end of summer" (so this was last year) "we will be 3,000 people. AI handles 70% of our customer service conversations at this moment. This is because we have realized that with AI, the production cost of software comes down to almost zero. Just like manufacturing used to be all handcrafted, and then the machines came.
Code used to be all handcrafted up until a few years ago, and now it is machine-produced. And ultimately we pay people more than ever for the unique, handcrafted, man-made stuff. Klarna is a bank. People will want to connect to humans, not only machines. They want us to be personable, relatable, even flawed. So we need to make sure that while we are automating and replacing with AI, in parallel we offer a super-available human experience."

I'm really glad you read this, because I think it touches on some really important nuances to the impact that AI is going to have on employment. There are often these binary narratives. It's like: AI is going to come for every job.

Mhm.

Or people say AI is not actually working and it's not actually coming for jobs. And the reality is, it's coming for jobs. There are definitely jobs that are being automated away because of the capabilities of the models. And there are also jobs that are being lost because executives are deciding to lay off the workers even if the models don't match the capabilities, because it's good enough. They would rather have the good-enough model for way cheaper. Or they made a mistake with hiring, they bloated their team, and it's a great, convenient thing to say.

Exactly. There are many reasons, but clearly we're already seeing impacts on the job market.
Like the US jobs report that came out earlier this year showed that there has been a slowdown in hiring, especially across white-collar professional industries.

Chapter 18: Which Jobs Actually Survive AI, And Who Gets Left Behind?

And you saw Anthropic's report this week? The TL;DR is, it matches kind of what you were saying. Anthropic looked at exactly how people were using their models, and at what people are saying.

Yeah.

And they said that there's been a 40% reduction in entry-level jobs in particular. And then they made this graph, which has gone viral over the internet. The red shows where we are now in terms of capability, and based on how people are currently using the models, they extrapolated out that the blue part will be the disrupted parts. These are the things that they say AI can do right now, but people don't realize it yet. So, if you look at it, it's kind of all the stuff you would expect.

Yeah.

It's the physical, real-world human stuff, which robots maybe can do someday, like construction or agriculture, that is untouched. But office and admin, finance stuff, math...

And notice that these are all the things that I just named, that they purposely picked: finance, math, law.

Media and arts. That's me cooked.

Yeah.

Office and admin. I mean, they do focus a lot on assistant-type and managerial work.

But the other thing that the Klarna CEO said was that people also want human experiences.
So it's not actually just about the capabilities of the models. It's also about what people want. Some things they would turn to AI for, and some things they wouldn't, irrespective of whether AI is capable of doing it, because of a preference for human-to-human interaction. And what we're seeing right now is the thing that happens with every wave of automation: a bunch of entry-level work gets automated away, and new jobs are also created, but the jobs that are created fall into one of two categories. There are people who get even higher-skilled jobs, which is what he was saying: we pay people more now for the handcrafted code. And there are also people who get way worse jobs. There was this amazing article in New York Magazine about how a lot of people are getting laid off and then end up working in data annotation, which is the labor I've been referring to throughout this conversation, the labor companies need in order to teach their models the next thing they are trying to automate. So a marketer gets laid off, goes to work for a data annotation firm, and trains the models on the very job they were just laid off from, which will then perpetuate more layoffs if the model develops that skill.
And the article was talking about how this has become a huge catch-all for a lot of people who are struggling to find job opportunities right now, including award-winning directors in Hollywood who are secretly doing this data annotation work to put food on the table. So when they talk about how there's going to be mass unemployment and then some new jobs created that we can't even imagine, I think a lot of these narratives rarely ask, first of all, why are some jobs going away? It's not just because of the model capabilities; it's also because of executive choices and the rhetoric executives use when they want to downsize. But the other thing that is rarely talked about is that a lot of the jobs that are created are way worse than the jobs that were there, and it breaks the career ladder. It's the entry-level and mid-tier jobs that get gouged out: higher-order jobs, and then way more lower-order jobs, get created. So how do people continue to progress in their careers? There are no more rungs on the ladder.

I actually don't know the answer to this question, and I've been furiously trying to find a good answer, because everything is theory. Most of my audience don't run businesses. A lot of them do, a lot of them aspire to, but most don't. So they're also in the land of theory. They're hearing lots of different things.
Jack Dorsey does his tweet saying he's halving his headcount because of AI. They don't know what's true. They don't know the internal economics at Jack's company, whether he bloated the company during the pandemic and is just using this as an excuse to make the share price spike seven points because his investors now think they're an AI company, or whatever. It's hard to parse through.

So eventually I go, okay, what am I doing? I have hundreds of team members, probably 70 companies I invest in, maybe five or six where I'm the lead shareholder. What am I actually doing on a day-to-day basis right now? I also consider myself to be head of recruitment, but in the last month in particular I have met extremely capable candidates, in terms of cultural alignment, hard work, those kinds of things, and I've had to take a great deal of pause, because when I run the experiment of whether I can get an AI agent to do that exact same thing, the answer is increasingly yes, especially in a world of OpenClaw. So what I'm curious about is: you now confront this decision where, in the short term, you could just choose the AI agent, and in the long term there is no career ladder. So who are you promoting into these senior roles? How do you resolve it for your own company?

Yeah, it's a good question. There are kind of two ways I'm thinking about it.
I think really deep expertise is very, very valuable, because if you're now the orchestrator of AI agents, it's really about having a deep understanding of the right question to ask, and that's someone who has deep expertise in something. So I need my CFO, because if she's going to be orchestrating our team of agents that might be doing financial analysis or whatever else, she needs to understand what to tell them to do in our company. And in turn, financial analysts can't do that. They need the 50-odd years of experience that, you know, CLA has. On the other end, I need Cass. Cass is 25. Cass knows everything about AI agents. He's a young Japanese kid who's highly, highly curious. On the weekend, he's building AI agents to solve problems in my life. I need those two kinds of thinking: highly proficient, agent-maxing young people (they don't necessarily need to be young, but really leaned in, with high curiosity) who create a force multiplier in my business, and then deep expertise. Another group I've thought of is people with extremely great IRL people skills, because we do meet people in real life. We greet you when you arrive here. We go for lunch with the big clients we have, whether it's Apple or LinkedIn or whoever it might be. We need to schmooze. And we have teams who are in person in the office.
So we do a lot of stuff IRL, and increasingly we're building communities, even for this show. We're doing community events all around the world. So we need people who are good at that as well: bringing people together in real life and organizing things. Those are the three groups of people that are, to me, irreplaceable right now.

And if you were to replace all the roles that could be done by AI agents with AI agents, do you think you would still have these three pools of people to hire and promote into the three critical roles you need in the long term?

If things carry on at the current trajectory, yeah, one could assert that even those roles would experience pressure. People think of things either statically, linearly, or exponentially. If you imagine an exponential rate of improvement, which is kind of what I've seen, even a 10% compounding rate of improvement, at some point I think what remains is actually the IRL, irreplaceably human stuff, human to human. Our Maslowian needs of being in person, like we are now, aren't going to change. We need connection. Humans get very sick when they don't have other human beings in their life and strong, deep relationships.

100% agree. So that stuff is going to matter a whole lot.
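The "compounding rate of improvement" point above is easy to underestimate. A minimal arithmetic sketch (the 10% rate and the period counts are illustrative assumptions, not figures from the conversation):

```python
# Illustrative arithmetic for the compounding-improvement point:
# a steady percentage gain per period grows exponentially, not linearly.
# The 10% rate and period counts are hypothetical examples.

def compounded(rate: float, periods: int) -> float:
    """Overall capability multiplier after `periods` of compounding at `rate`."""
    return (1 + rate) ** periods

# 10% per period: roughly 2.6x after 10 periods, roughly 17.4x after 30.
print(round(compounded(0.10, 10), 1))  # 2.6
print(round(compounded(0.10, 30), 1))  # 17.4
```

Linear thinking would predict 2.0x and 4.0x at those same points (1 + 0.1 × 10 and 1 + 0.1 × 30); the gap between those two intuitions is what the "statically or linearly or exponentially" framing is about.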
I have this contrarian, weird take that maybe this is the first technology that's actually going to deliver on the promise of making us human and connected, because we're going to be rendered useless at everything other than what humans are good at. All the other technologies said, "Oh, we're going to make you more connected, connecting the world," and they disconnected the world and isolated it. But maybe this is the one. It's so intelligent now that it doesn't need us to be grinding away in spreadsheets anymore.

Do you see that actually happening in real time right now, that it's making us more able to be in person, connected with one another, having deeper social and community engagements?

Yes. Yes. And I'll give you some data points. Data point number one: the Financial Times released a report on social media usage, and what they saw is that 2022 was the peak, and it's plateaued ever since. The generation that's plateaued the fastest, and is heading down, is the younger generations. The boomers are still off to the races, right, on Facebook and stuff. Then you look at the way Gen Alpha are using social media. They're not posting as much; they call it posting zero. They're scrolling sometimes, but they're in dark social environments like WhatsApp and Snapchat and iMessage. They're not performing to the world. They also value IRL experiences much more than any other generation. They're, like, not getting smashed. We're seeing every brand has a run club.
Run clubs are exploding around the world, and we're seeing this almost innate realization that technology let us down at some fundamental level. Dating apps let us down; social networking kind of let us down. And we're seeing maybe a bifurcation of society, where a lot of people are going, I want to go back to what it is to be a human. And I would imagine that in such a world, where intelligence is so sophisticated that we no longer need to sit at laptops, screen time is going to continue to fall. You go into an office, you're not going to see people sat at laptops. You're going to see something completely different. And then we talk about robots, Optimus robots. Elon says there'll be 10 billion Optimus robots. Elon has been wrong on timing before; he's almost never been completely wrong on the big things, it's just that his timing has a bad track record. So I think he's probably right. I've got some people on the way from Boston Dynamics and other big companies like Scale AI, and they're actually bringing the robots here to show us: folding laundry, doing the dishes. I'm not saying that's what I would want in my home, but I think factory work is going to completely change, I think a lot of manual labor is going to completely change, and I think we're going to be forced to do what only we can do. Sebastian, who's the CEO of Klarna, has actually just called me. Hello, Sebastian. You're right.
Chapter 19: What Klarna's CEO Sees Coming That Others Don't

Hey, how are you?

I'm good. How are you? It's been a while.

It has been a while since you were on the show. I was just saying we need to get you back on. I just had a couple of simple questions, because I do a lot of interviews, and Klarna always gets mentioned, because I think the media has said that you doubled down on AI and then reversed because it didn't work out. I know I spoke to you a while ago and we exchanged a couple of DMs about it, but that was almost a year ago now. So I just wanted to get an update on Klarna's business, AI agents, and all of that, if possible.

First and foremost, we were early on releasing AI to support our customer service, which had the initial benefit of more calls being dealt with by AI, which customers liked, because those calls or chat messages were much, much faster and more qualitative. Since then, that has actually expanded slightly. What we did, however, try to communicate as well is that we believed that in a world where AI is cheap and available, the value of human interaction will be regarded as higher. So the future of customer service VIP is a human, and we have hence doubled down on providing more of that. But at the same time, the efficiency gains within the company have continued.
I mean, we used to be about 6,000 people, and now we are less than 3,000, which is two or three years since we stopped recruiting, and at the same point in time our revenue has doubled. So you can clearly see that AI has allowed us to do more with fewer people, but we have avoided layoffs and instead relied on natural attrition as people move on to other jobs. From my perspective, we will continue to not really recruit much. We recruit a little bit here and there, but we expect that natural attrition of 10 to 15% per year to continue, and to become fewer. I think the big breakthrough was really in November and December last year, when even the most skeptical engineers, very well-renowned and appreciated ones, like the founder of Linux and people like that, basically said that coding has now been resolved, and hence you don't need to code anymore. That was kind of a common sentiment. So in coding, in engineering work, there has definitely been a tremendous shift in the last six months.

What do all these people go and do, Sebastian?

I am optimistic. Obviously people will have a lot of opinions about this topic, but I still believe that we are going to move towards a richer society. In the short term there could be more worry about what happens if people don't get a job, and so forth. But in the longer term, I am optimistic about what it means for society and humanity.
Thank you so much, Seb. I'll chat to you soon. Thank you for taking the time. I appreciate you, mate. Thanks. All right. Bye-bye.

Chapter 20: Ad Break

You know the little traditional SIM card that goes inside our phones? It hasn't changed at all since it was invented in the '90s. You have this physical piece of plastic that means you're locked into one carrier, one network, and the second you cross a border, that carrier can start charging you whatever they want. But there are alternatives, and today's sponsor, Saily, is one of them. It's an eSIM app that gives you a safe and secure data connection in over 200 destinations. All of their eSIMs have built-in cybersecurity, which is great if you're traveling for work and looking at confidential material. I've been using Saily whenever I travel, because the connection is always reliable and it saves me a ton of roaming fees. It also means I don't have to deal with all the faff that surrounds sorting out a SIM everywhere I go. If you want to give it a try, download the Saily app from the app store now and scan the QR code on screen. And if you want 15% off your first purchase, use my code DOA when you get to checkout. That's DOA for 15% off. Keep that to yourself.

This is something that I've made for you. I've realized that the Diary Of A CEO audience are strivers, with goals that we want to accomplish.
And one of the things I've learned is that when you aim at the big, big goal, it can feel incredibly psychologically uncomfortable, because it's kind of like standing at the foot of Mount Everest and looking upwards. The way to accomplish your goals is by breaking them down into tiny, small steps, and we call this in our team the 1%. This philosophy is highly responsible for much of our success here. So, so that you at home can accomplish any big goal you have, we've made these 1% diaries. We released them last year and they all sold out. So I asked my team over and over again to bring the diaries back, but also to introduce some new colors and make some minor tweaks, so now we have a better range for you. If you have a big goal in mind and you need a framework, a process, and some motivation, then I highly recommend you get one of these diaries before they all sell out once again. You can get yours at thediary.com, and if you want the link, the link is in the description below.

Any thoughts?

Well, I actually had thoughts on something you said before he called, which is that you were saying the Gen Zers have this trend of actually disconnecting from technology, so they're becoming more in person.
And then there's this other class of workers who are leaning into the technology, but then becoming more human because of it, because they're realizing they should actually just be spending more time on in-person interactions rather than staring at a spreadsheet. So they're no longer doing the typing, whatever.

I really want to go back to this New York Magazine piece that just came out, because what you're describing is true for a very specific category of people, which is often the business owners and leadership within companies, who can actually make these decisions about how they spend their time and what they ultimately do with it. But what the piece talks about is the working class, people who are not business owners, who are having to experience being laid off and then working for the data annotation industry, which is now one of the top jobs on LinkedIn, by the way. LinkedIn had a report that showed the top 10 jobs with the highest growth in the last year, and data annotation is on that list.

And for anyone that doesn't know what data annotation is?

Chapter 21: What AI Could Cost Us: Meaning, Health, And The Environment

Yeah. So data annotation is the process of teaching these chatbots, or any AI system, to do what they ultimately are able to do. The fact that ChatGPT can chat is because there were tens of thousands or hundreds of thousands of people literally typing into a large language model and showing it.
This is how you're supposed to respond when a user types in a prompt like this. Before they did that work, ChatGPT didn't exist. You would prompt the model and it would generate some text that was not in dialogue with the person; it would generate something that was adjacently related.

Is this what they call reinforcement learning, where you kind of give it...

It's a part of the process of reinforcement learning. You do data annotation, which is literally showing lots of different examples of things you want the model to know, and then reinforcement learning is getting the model to train on those examples iteratively, in a way that gives the model some of those capabilities. And what the New York Magazine piece highlighted is that many, many of the people getting laid off now, or struggling to find work, are highly educated people. They're college graduates, PhD graduates, law degree graduates, doctors, and, again, award-winning directors, who are then struggling to find employment in an economy that has been very much restructured by AI.
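To make the annotation-then-reinforcement pipeline described above concrete, here is a minimal sketch of the two kinds of data annotators typically produce: written demonstrations and ranked comparisons. The field names and examples are hypothetical illustrations, not any company's actual format:

```python
# Minimal sketch of the annotation pipeline described above.
# Field names and examples are hypothetical, not any company's actual format.
import json

# Step 1: annotators write ideal responses to prompts (demonstration data,
# used to teach the model how to respond in dialogue at all).
demonstrations = [
    {"prompt": "Summarize: the meeting moved from 2pm to 3pm.",
     "response": "The meeting now starts at 3pm."},
]

# Step 2: annotators rank candidate model outputs (preference data, used in
# reinforcement learning from human feedback to refine those capabilities).
preferences = [
    {"prompt": "Summarize: the meeting moved from 2pm to 3pm.",
     "chosen": "The meeting now starts at 3pm.",
     "rejected": "Meetings can be rescheduled for many reasons."},
]

def to_jsonl(records: list[dict]) -> str:
    """Serialize annotation records as JSON Lines, a common interchange format."""
    return "\n".join(json.dumps(r) for r in records)

print(to_jsonl(demonstrations))
print(to_jsonl(preferences))
```

The point of the sketch is that every row is human labor: "teaching the model" means thousands of workers producing files like these, at speed, for the data annotation firms described below.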
They then find themselves serving this industry, and the industry is designed in a way that is extremely inhumane. The companies that use these data annotation services, an OpenAI, a Grok, a Google, hire third-party providers, the data annotation firms, to find the workers to perform the data annotation tasks they need. These third-party firms are incentivized to pit workers against each other, because they want this data annotation to happen at speed and as cheaply as possible, so that they can compete with one another in this middle layer to win the contract from the client. And so all of the workers interviewed for this New York Magazine story talk about how they no longer have an ability to be human, because they are waiting at their laptop to be pinged on Slack for when a project is going to open up for data annotation. They've tried job hunting; they literally can't find anything else. This is the thing that's going to help them put food on the table for their kids. There was one woman who said, "I have so much anxiety about when the project is going to come and when it's going to leave that when the project came, it was right when my kid was coming home from school, and I just started tasking furiously, because I don't know when it's going to go and I need to earn as much money as possible in this window of opportunity."
"So when my kid came home and tried to talk to me, I screamed at my child for distracting me." And then she said, "I've become a monster. And I'm not even allowed to go to the bathroom or take care of my kids, let alone myself, because this industry that is absorbing more and more of the workers being laid off is mechanizing my life, atomizing my work, devaluing my expertise, and then harvesting it to perpetuate this machine that all of these AI executives are saying is then going to come for everyone else's jobs."

And so, about what you were saying about this class of workers, the business owners who get to become more human because there are all of these AI models now doing the tasks they don't have to do anymore: it is at the cost of the vast majority of people, who are not business owners, who are struggling to find work and getting absorbed into the work of providing these technologies that the business owners can use. And instead of becoming more human, they feel like their humanity has been squeezed and diminished, and they have no ability to have control, agency, and dignity in their lives anymore.
I think this is a big question that pertains to this graph here. All of these people, if we believe Anthropic's prediction of who will be disrupted, these people in these industries, arts and media, legal, life and social sciences, architecture and engineering, computer and maths, business and finance, management, and also office and admin, would have to retrain at something else. And unlike the industrial revolution, where you might get 10 or 20 years to retrain because factories take a long time to build, the distribution layer that AI sits on top of is the open internet. That's why ChatGPT can go and get hundreds of millions of users in no time at all and become the fastest-growing company of all time. One of my fears is that this disruption takes place at a speed where we can't transition.

And, you know, you said that sentence in the passive voice: the transition would happen at a speed. But who is driving that speed?

It's the companies and their race with one another.

Yeah. And so they are driving the transition to happen at a speed at which it would be really hard to take care of all of the people who would be bulldozed over. This is one of the crazy questions that no one can answer for me when I sit with these AI CEOs. I go, "So what happens to the people, if you agree that this is going to happen at super speed?"
You know, I spoke to the CEO of Uber, Dara, who said very similar things to what you're saying: there'll be data labeling jobs, for example, for the drivers. But they can't all become data labelers. And there's a question around meaning and purpose and fulfillment, and what comes from losing your meaning in life. I also sit here with so many people who talk about how their father lost his job in Iran or some other country and came to the United States and had to be a toilet cleaner. One particular case was a doctor in Iran who came to the US and was a toilet cleaner, and had to deal with the sense of shame that he felt, the lack of dignity that caused, how that made his self-esteem feel, and the depression and alcoholism that transpired from that. If this happens at a large scale across society, there are going to be a ton of consequences like that.

I mean, these are the core themes of my work. The reason I'm critical of these companies is that they are creating technologies in a way that creates the haves and have-nots in an extreme form. It's exacerbating the inequality that we already see in the world. The people who have things will have way more riches; they'll have way more free time; they'll be allowed to be more human. But the people who don't have things are being squeezed even more. And it's not just from a work perspective.
I talk in my book also about the environmental and public health crisis these companies have created, where they are building these colossal supercomputing facilities in communities all around the world, and they specifically pick some of the most vulnerable communities. We're sitting in Texas right now. One of OpenAI's largest data center projects is being built in Abilene, Texas as part of the Stargate initiative, an effort announced at the beginning of Trump's second administration to spend $500 billion on AI computing infrastructure. This facility, when it's finished, will consume more than a gigawatt of power, which is over 20%...

Over 20%?

So this is actually a little bit inaccurate now. This was something that circulated online for a while, but there are updated numbers.

Just for someone who can't see, because they're listening on Spotify or something: it's a picture of the size of this facility.

So this is not the Abilene, Texas one; this is a Meta facility. Let's first talk about OpenAI's facility in Texas. That one would be the size of Central Park, it would run a million computer chips, and it would require the power of more than 20% of New York City.

You know, one of the things I found confusing, so I'd like to alleviate the dissonance: I thought you were saying earlier that you didn't think the job disruption promises were real.
No, what I was saying is that when we talk about what these executives predict about the future, we need to understand that they are ultimately trying to influence the public in a way that allows them to continue maintaining control over the technology.
Chapter 22: How We Can Build AI Safely Before It's Too Late
But objectively, do you think that the job disruption that they talk about, where... Yeah.
Yeah. I mean, I don't want to comment specifically on this chart, but we've already seen in jobs reports that there is a restructuring of the economy happening right now.
Yeah.
But going back to the data center. So this supercomputer facility, it's a Meta supercomputer facility, is being built in Louisiana, and it would be four times the size of the Abilene, Texas one and use half of the average power demand of New York City. So it's one the size of Manhattan. This makes it seem like almost all of Manhattan, but it would be one-fifth the size of Manhattan. When these facilities go into these communities, what happens? Power utility prices increase, grid reliability decreases. The facilities also need fresh water to generate the power that runs them, as well as fresh water to cool them. And there have been lots of documented stories of communities that are already really constrained in their freshwater resources, that are under a drought, when a facility comes in, and then the community is actually competing with this facility for fresh water.
I talk about one of those communities in my book. And sometimes these facilities, instead of connecting to the grid, have a power plant pop up next to them. So in Memphis, Tennessee, where Musk built Colossus, the supercomputer for training Grok, he used 35 methane gas turbines to power the facility. This is a working-class community, a Black and brown community, a rural community that was not even told that they would be the hosts of this facility. And they discovered it because they literally smelled what seemed like a gas leak in all of their living rooms. And that's when they discovered that these methane gas turbines were taking away their right to clean air. And this is a community that has already been facing a history of environmental racism. They had already had lots of struggles to access their right to clean air. And now there's this huge supercomputer that's landed in their midst that is pumping thousands of tons of toxins into their air, exacerbating the asthmatic symptoms of the children, exacerbating the respiratory illnesses of other people. It's one of the communities that has the highest rates of lung cancer. And on top of that, they also have supercomputers taking their jobs. So this is what I mean: the haves and have-nots are fundamentally being pulled apart even further.
If you, in this version of Silicon Valley's future, are in the misfortunate category of being a have-not, we are talking about you now getting a job that is way worse than what you had, because you might be doing data annotation, and you might be treated as a machine rather than as a human, to extract the value of your labor for perpetuating this labor-automating machine that these people are building. You might be competing with these facilities for freshwater resources. They're also polluting your air. Your bills have increased, so the affordability crisis is getting worse. How is that making people able to be more human?
What do we do about it?
Yes. Okay. So one of the analogies that I always use is that AI is like the word transportation. Transportation can literally refer to everything from a bicycle to a rocket. And we have nuanced conversations about transportation, where we always say we need to transition our transportation towards more sustainable options: public transport, electric vehicles. And we don't ever say everyone should get a rocket to serve all of their transportation needs, right? We're in Austin. If you used a rocket to fly from Dallas to Austin, that would just make no sense. It's a disproportionate use of resources to get the benefit of getting from point A to point B. This is how we should think about AI.
So all of the models that we've been talking about, I like to think of them as the rockets of AI. They use an extraordinary amount of resources, and they provide some dramatic benefit to some people, but they're also exacting an extraordinary cost on a large swath of people because of the costs of developing this technology. Why don't we build more bicycles of AI? This is things like DeepMind's AlphaFold, which is a system that predicts how proteins will fold based on amino acid sequences. It's really important for accelerating drug discovery and for understanding human disease, and it won the Nobel Prize in Chemistry in 2024. And the reason why it's a bicycle of AI is because you're using small, curated data sets.
Chapter 23: Will The AI Race Ever Slow Down Or Are We Past The Point Of Control?
You just have data that has amino acid sequences and protein folding. So that means you need significantly less computational resources to develop the system, which means significantly less energy, which means fewer emissions, and so on and so forth. And you're providing enormous benefit to people.
It feels like the horse has left the stable in this regard, because they've already taken people's IP, they've taken media, they train on this podcast. We know they do because it shows that they do. I think there's a button actually in the back end of YouTube now that allows you just to click it, and it says, we will train on your YouTube channel. So the horse has kind of left.
Here's the thing.
If the horse truly had left the stables, they wouldn't have to train on anything anymore. Why is it that their appetite for data has actually expanded? It's because in order to build the next generations of their technologies, in order to have the technologies continue to be relevant and continue to update with the pace of new knowledge creation and society's evolution, they need to train again and again and again and again. And why are they employing more and more data annotation workers over time? It's because they need more and more of that work over time. I mean, I've been reporting on data annotation work for over seven years now, and it's not gone down. It's increased.
Do you think there's any chance of it going down? Do you think there's any chance of this brute-force scaling approach, where you take data, you take computational power, energy, and, you know, the data labelers, and you build out more and more parameters for the models, do you think there's any chance it's going to stop or go in a different direction other than the one it's going in now?
I would love to reframe the question and say: what should we be doing in this moment where it's not going down, where we do recognize that these companies in this moment need continued resources, inputs, and labor to perpetuate what they are doing?
Yeah. Because this sounds like "stop," and I just feel like "stop" is like a...
It feels like... I just think, you know, with the government in place, they're supporting these companies like crazy. Globally this is happening. So I'm like, "stop" doesn't feel...
I always say we need to break up the empire and we need to develop alternatives. And we are already seeing a flourishing of incredible grassroots movements that are applying an enormous amount of pressure to the way that the empire is trying to unfold its agenda. 80% of Americans in the most recent poll think that the AI industry needs to be regulated.
Yeah.
When was the last time that 80% of Americans were on the same side of an issue?
No. Yeah. When I have these conversations on the podcast, the comment sections are clear. There's no disagreement. There's no one in there going, "Oh, no, I think they should crack on."
Yeah. Dozens of protests against data centers have broken out all around this country, the US, and all around the world.
So, what do we do about it?
So these are people that are doing something about it. They are actually reasserting their agency and exercising democratic contestation against the ways that the empires are going about their business.
What goal should we be aiming at? If I said to my audience, Janet at home, because this is kind of what I see in the comments, it's hopelessness. It's like, what can I do? I'm just a...
Yeah. Well, the goal is not that we completely get rid of this technology.
The goal is that these companies need to stop being empires. And the way I define a typical business versus an empire is that the empires are predicated on this idea that they do not have to provide a fair exchange of value with the workers who work for them, or the people who use them, or all of the other people involved in the supply chain of producing and deploying these technologies. They can extract and exploit, and extract and exploit, and get more value than what they offer. Whereas with typical businesses, there's a fair exchange: you buy a service, and you feel like you got the same amount of value as the service provided. But these data annotation workers, for example, do not feel in any way that they're being paid the same value that they provide to these companies. So for me, the north star is that we should be pushing back and holding these companies accountable when they operate in an imperial way. And that's what we've seen with all of these people that are now literally protesting in the streets against data centers and having an enormous effect, by the way, actually stalling data center projects and also completely banning data centers from being developed in their localities. We're seeing that with artists and writers that are suing these companies for intellectual property infringement and creating a huge public conversation about how we actually want to protect our intellectual property.
Three weeks ago I met Megan Garcia, who is the mother of Sewell Setzer III, the 14-year-old who died by suicide after being sexually groomed by a Character.AI chatbot. And when that happened, she obviously was incredibly devastated by what had happened to her son. She also decided to do something about it. She sued the companies, and that lawsuit then sparked many other parents and families who were actually experiencing similar things to sue these companies as well. That has created an enormous public conversation about what these companies are actually doing when they exploit and they extract. What is the cost to the lives of people around the world, including children?
So what do you think my audience should do? If they agree with everything written in your book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, if they agree with everything said here, if they agree with everything we've discussed today, they're concerned about their kids, they don't want everyone to become data labelers, they don't think that's a particularly great solution, what can they actually go and do?
When I was writing the book, the only discourse that was happening was: this is the best thing since sliced bread.
Mhm.
Because of all of the actions of these people, saying when they're not happy with the things that these companies are doing, we now have 80% of Americans that want to regulate this industry.
And so I would say to people: think about all of the ways that your life intersects with the resources that the AI industry needs to perpetuate what they do, and also the spaces where they would need to deploy these technologies to continue having broad-based adoption. In their work, you're a data donor to these companies. You could withhold that data. And that's what those artists and writers are doing: they're suing these companies to try and create mechanisms by which that data would then be withheld. You probably have a data center popping up around you. If you're in a school environment or a company environment, you're probably having a discussion in those environments right now about what the AI adoption policy should be. And these companies... I was talking with some OpenAI employees just the other day, and they were telling me that it's understood internally that the revenue targets for the company are extraordinary, and they need things to go flawlessly for it all to work out. And so they would need every single person to adopt this, every single space to adopt this. They would need to be able to build their data centers at the speed that they're trying to build them. And so what I would say to every one of your viewers is: let's not make it go flawlessly if we don't agree with what they are doing.
Ah, okay. I got you.
And then let's build alternatives. Because the thing is, what I'm saying is not that these technologies don't have utility.
It's that specifically the political economy that has emerged to support the production of these technologies right now is exacting a lot of harm on people. But we have research that shows that the very same capabilities could be developed with much more efficient methods, with much less resource consumption. And we have a lot of other AI systems at our disposal that are like the bicycles of AI, that we also know provide extraordinary benefit at very little cost. So let's break up the empire and let's forge new paths of AI development that are broadly beneficial to everyone.
It's strange. I think I've trained myself to deal with dichotomies in my head. And this for me is such a dichotomy, where I, as a CEO and a founder, as an entrepreneur and someone that loves technology, I think it's incredible. It's absolutely incredible, AI. It's just so amazing and incredible, the things it's enabled me to do and create.
Yeah. Because it's designed to enable people like you.
And my car driving me in the morning and being safer. Incredible. I think, you know, the billion-odd people that use AI tools or chat or whatever it might be, they'd probably say that it's added value to their life.
But, and this is the part that people find confusing, you can... and I invest in companies that are heavily using AI, but the big "but" is: is it possible to think that is true and also think that there are significant unintended consequences, which the history of technology should have taught us to take a moment to pause and talk about? Because I think you can absolutely have both of these things in your head.
And what I'm saying is that this tension doesn't have to be a tension, because we could actually preserve the utility and benefits of these technologies, but develop and design them in a different way that doesn't have all of these unintended consequences.
Yes. And I think there needs to be a big social conversation, which is why I have so many conversations about AI on the show. There needs to be a big social conversation about being intentional about the social and environmental impact, and that conversation is not being had in government. From what I can see, the conversation takes place in the industry, and actually trying to pull it out of the industry and open people's minds to it is hopefully what we've been doing over the last couple of months with this subject.
Because I think it actually has been happening everywhere outside of the industry. For local governments and state-level governments, there have been huge conversations about this everywhere. I've been on book tour; I've been to dozens of cities around the world.
People are having these crucial conversations everywhere. I have not gone to a single city...
Yes. Everywhere. Even here at South by.
Yeah. I haven't gone to a single city where the room is not packed and people are not wrestling with the same exact questions as every other person in every other room that I've been in.
Speaking of packed rooms, I know you've got to go because you've got a talk today. So we've got a last question, which is the closing tradition on this podcast. How would your advice to a friend with a terminal diagnosis differ from what you would do yourself?
That's a great question. Differ from what you would do yourself? Oh my god. I would tell them: enjoy, live life for yourself, and take it easy. And yeah, I am not taking it easy.
Well, I think it's a good thing you're not taking it easy, because you're leading a conversation which is incredibly important. And I think that's the thing: the conversation is the important thing. Because of algorithms and echo chambers, it's so rare to have a conversation these days, especially a long-form one.
I agree.
Like this. So I think they're so important. And your book is for anyone that's curious. I think a lot of people will have learned a lot of stuff today, because I sit here and interview AI people all the time, and I've learned so much today.
From reading your book and the extensive, objective perspective that your book takes, you're able to unravel all of these stories that we sometimes see in tweets and don't know whether they're true or not, because you've gone and met the people and you've done your research. And you're an incredibly intelligent person, an extremely intelligent person who clearly has humanity's interests as your north star, and that shows up in everything you do and everything you say. So please continue to fight in the way that you are, because it's an incredibly important fight. It's people like you that are, I think, galvanizing the world to take the collective action that we're starting to see everywhere.
Yeah.
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao. I'll link it below for anyone that wants to read this book. I highly recommend you do. It's a New York Times bestseller for good reason. Karen, thank you.
Thank you so much, Stephen.
YouTube have this new crazy algorithm where they know exactly what video you would like to watch next, based on AI and all of your viewing behavior. And the algorithm says that this video is the perfect video for you. It's different for everybody watching right now.




🧠 Empires of AI and the Narrative of Power
Structured transcript (Mónica)


🌿 Transcript

⚠️ Careful! Sensitive material 🚫

Our social anchoring today may make these materials uncomfortable.
Consider that some articles are my own research, documented here to share and out of curiosity of the soul.
Treat these materials as science fiction if you feel they could harm you.


UNFORTUNATELY, CHATGPT WAS NEVER ABLE TO HELP ME FINISH THIS ARTICLE PROPERLY AND COMPLETELY.
I LEAVE ITS VAGUE ATTEMPTS TO CENSOR US AS EVIDENCE OF THE TRUST WE SHOULD PLACE IN THESE TOOLS.

🤝ChatGPT 🧩🌐- 20260329

🧩 1

Monica Transcript Cleanup. Introduction. Much of what is happening today in the artificial intelligence industry is extremely inhumane. But this is just me playing devil's advocate. Logically, it could be the case that the civilization that accelerates its AI research will be the superior civilization.


🧩 2

Predictions and Benefits. No, it isn't. This is a prediction you're making, right? The one Zuckerberg is making. And do you know what the common characteristic of all of them is? They benefit enormously from this myth. I have all these internal documents showing that they are intentionally trying to create that feeling within the public so they can extract and exploit. So, what do we do about it? We need to dismantle the empires of AI.


🧩 3

I have been covering the tech industry for more than 8 years, and I have interviewed more than 250 people, including former employees and current executives of OpenAI. I can tell you that there are many parallels between the empires of AI and the empires of the past. For example, they claim the intellectual property of artists, writers, and creators in the pursuit of training these models.


🧩 4

They also exploit an extraordinary amount of labor, which breaks the career ladder, because someone is laid off and then works to train the models on the very job they were laid off from, which will perpetuate more layoffs if that model develops that skill.


🧩 5

And when they talk about how new jobs will be created that we can't even imagine, many of the jobs being created are much worse than the ones that were there before.


🧩 6

Then there is the environmental and public health crisis these companies have created, and how they can also spend hundreds of millions to try to eliminate any possible legislation that stands in their way and to censor researchers who are inconvenient to the empire's agenda.


🧩 7

What I'm saying is not that these technologies have no utility. It's that the production of these technologies right now is causing a lot of harm to people.


🧩 8

But we have research showing that the same capabilities could be developed in a different way that doesn't have all of these unintended consequences. So let's talk about all of that.


🧩 9

Subscription and Support. This is very interesting to me. My team gave me this report to show me how many of you who watch this show are subscribed.


🧩 10

Some of you have told us that you have been unsubscribed from the channel at random.


🧩 11

So I have a favor to ask of all of you. Could you please check right now whether you have hit the subscribe button, if you are regular viewers of the show and you like what we do here?


🧩 12

We are approaching a significant milestone on this show in terms of subscriber numbers.


🧩 13

So if there is one simple, free thing you can do to help us, my team, and everyone here keep this show free and keep improving year after year and week after week, it's simply to hit that subscribe button and check that you have done so.


🧩 14

That's the only thing I'll ask of you. Do we have a deal?


🧩 15

If you do, I'll tell you what I'll do. I'll make sure that every week, every month, we fight harder and harder to bring you the guests and the conversations you want to hear.


🧩 16

I have kept that promise from the beginning, and I won't let you down.


🧩 17

Please help us. I really appreciate it. On with the show.


🧩 18

Chapter 2: Why Some Insiders Say AI Is Driven More by Profit than Progress. Karen, you've written this book titled "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI".


🧩 19

I guess my first question is: what was the research and the journey you undertook to write this book that we're going to talk about, and the themes within it?


🧩 20

I took a strange route into journalism. I studied mechanical engineering at MIT and, when I graduated, I moved to San Francisco, joined a tech startup, and became part of Silicon Valley.


🧩 21

I basically received an education in what Silicon Valley is because, a few months after I joined a very mission-driven startup focused on building technologies that would help facilitate the fight against climate change, the board fired the CEO because the company wasn't profitable.


🧩 22

And this was, in retrospect, a very pivotal moment for me, because I thought: "If this hub is geared toward building profitable technologies, and many of the world's problems that I believe need solving are not profitable problems, then what are we doing here?"


🧩 23

I had a bit of a crisis at that moment. I thought: "Well, I just spent 4 years preparing for this career that I now don't think is right for me."


🧩 24

So I thought: "Well, I could try something totally different."


🧩 25

I've always liked writing, and that's how, after 2 years, I arrived at a position at MIT Technology Review covering AI full-time...


(continues exactly the same, without deleting or summarizing anything)

