Tech races and regulation: what AI can learn from nuclear and the Human Genome Project
This is my 10th ‘conversation’, focusing on the current AI race and what it can learn from other endeavours: the Human Genome Project on public/private partnership, and the nuclear field on international regulatory frameworks. This conversation also covers open source and how privacy regulation will need to evolve as we get closer to AGI.
Basically, how can we learn to avoid a future like the one in the image below, generated in Midjourney?
🎙The interview
Who won the race to sequence the human genome between the public and private sectors?
The race to sequence the human genome was a competition between a publicly funded international consortium called the Human Genome Project (HGP) and a private company called Celera Genomics. The HGP was a collaborative effort involving researchers from various institutions and countries, while Celera Genomics was led by Dr. J. Craig Venter.
In a sense, both the public and private sectors "won" the race, as they ultimately collaborated to announce the completion of a working draft of the human genome sequence in June 2000. The two groups used different methods for sequencing, with the HGP using a hierarchical shotgun approach, and Celera using a whole-genome shotgun approach.
By working together, they were able to generate a more comprehensive and accurate sequence than either would have been able to produce alone. The final, complete version of the human genome sequence was published in April 2003, marking the official end of the Human Genome Project.
When was the Human Genome Project started, who initiated it, and who led it initially? Could you in particular outline its governance setup?
The Human Genome Project (HGP) was officially launched in October 1990, but its emergence can be traced back to several earlier initiatives and discussions among scientists. A key early event was a 1986 workshop organized by the U.S. Department of Energy (DOE) in Santa Fe, New Mexico, which brought together leading scientists to discuss the feasibility of sequencing the entire human genome.
The HGP was an international effort, and its initial leadership included representatives from various institutions and countries. In the United States, it was jointly funded and managed by the National Institutes of Health (NIH) and the Department of Energy (DOE). Dr. James Watson, the co-discoverer of the structure of DNA, was appointed as the first director of the NIH's National Center for Human Genome Research (NCHGR), which later became the National Human Genome Research Institute (NHGRI). Dr. Watson served in this role from 1988 to 1992. Dr. Francis Collins took over as the director of the NHGRI in 1993 and led the HGP through its completion in 2003.
The governance of the HGP involved a collaborative network of research institutions, funding agencies, and scientists from around the world. Key participants included the Wellcome Trust Sanger Institute in the United Kingdom, the Max Planck Institute for Molecular Genetics in Germany, the Pasteur Institute in France, and the Baylor College of Medicine Human Genome Sequencing Center in the United States, among others.
The HGP was governed by a set of principles and guidelines that emphasized data sharing, international collaboration, and ethical considerations. One important aspect of the HGP's governance was the "Bermuda Principles," which were established in 1996. These principles called for the rapid release of DNA sequence data into public databases to ensure open access to the information for researchers worldwide.
When was Celera founded and was the sequencing of the human genome its objective from the beginning?
Celera Genomics was founded in 1998 by Dr. J. Craig Venter, who had previously been involved in the Human Genome Project. The primary objective of Celera from the beginning was to sequence the human genome using a different approach than the publicly-funded Human Genome Project.
Celera employed a whole-genome shotgun sequencing method, which aimed to greatly accelerate the process of sequencing compared to the hierarchical shotgun approach used by the HGP. By leveraging advanced sequencing technologies and computational methods, Celera aimed to produce a draft sequence of the human genome more quickly and cost-effectively than the HGP.
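For readers unfamiliar with the method, here is a deliberately tiny, hypothetical Python sketch of the whole-genome shotgun idea: sample random overlapping fragments ("reads") from a sequence, then greedily merge the fragments with the largest overlaps back into longer contigs. It only illustrates the principle; real assemblers, including Celera's, had to handle sequencing errors, repeats, and paired-end constraints at an enormously larger scale, and all names below are my own.

```python
import random

random.seed(0)  # reproducible toy example

def shotgun_reads(genome, n_reads, read_len):
    """Sample random, overlapping fragments ("reads") from the genome."""
    starts = (random.randint(0, len(genome) - read_len) for _ in range(n_reads))
    return [genome[i:i + read_len] for i in starts]

def overlap(a, b, min_len=4):
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def greedy_assemble(reads):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    contigs = list(set(reads))                 # drop exact duplicate reads
    while len(contigs) > 1:
        best_len, best_pair = 0, None
        for a in contigs:
            for b in contigs:
                if a is not b:
                    o = overlap(a, b)
                    if o > best_len:
                        best_len, best_pair = o, (a, b)
        if best_pair is None:                  # no overlaps left: return the contigs we have
            break
        a, b = best_pair
        contigs.remove(a)
        contigs.remove(b)
        contigs.append(a + b[best_len:])       # merge into a longer contig
    return contigs

genome = "ATGCGTACGTTAGCCGATCCATCGGCTAAGCTTACGGATCCGTA"
reads = shotgun_reads(genome, n_reads=40, read_len=12)
print(max(greedy_assemble(reads), key=len))    # longest contig; repeats can still cause misassembly
```

The hierarchical approach used by the HGP differs mainly in that the genome was first broken into mapped, individually cloned chunks, each of which was then shotgun-sequenced and assembled on its own.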
While the sequencing of the human genome was its main objective at the outset, Celera also had broader goals. These included leveraging genomic information to develop new pharmaceuticals, diagnostics, and other medical applications.
How much funding was made available in both initiatives until the publication of the whole human genome in 2003?
The Human Genome Project (HGP) was a publicly funded initiative originally budgeted at about $3 billion over a planned 15 years. Thanks to advances in sequencing technology and efficient use of resources, it finished two years ahead of schedule, in 2003, at a total cost of roughly $2.7 billion in fiscal year 1991 dollars.
Celera Genomics, being a private company, did not publicly disclose the exact amount of funding it received for sequencing the human genome. However, it is estimated that the company spent around $300 million on its sequencing efforts. It is important to note that Celera's funding came from a mix of private investment, corporate partnerships, and revenue generated from selling access to its genomic data.
It is worth mentioning that the overall cost of sequencing the human genome has dramatically decreased since the completion of the HGP and Celera's efforts, thanks to technological advancements and innovations in the field of genomics.
Are there any other international consortia with this level of funding that have been successfully established to pursue common goals?
Yes, there have been other international consortia established with significant funding to address global challenges and achieve common goals. While it is difficult to compare them directly with the Human Genome Project in terms of their exact funding levels, some of these initiatives have garnered substantial support and resources. Here are a few examples:
1. International Thermonuclear Experimental Reactor (ITER) Project: The ITER project is a collaboration among 35 countries to build the world's largest tokamak, a magnetic fusion device designed to prove the feasibility of fusion as a large-scale, carbon-free source of energy. The project has an estimated cost of around €20 billion, with contributions from the European Union, the United States, Russia, China, India, Japan, and South Korea.
2. Large Hadron Collider (LHC) at CERN: The LHC, the world's largest and most powerful particle collider, is a multinational project situated near Geneva, Switzerland. The LHC's construction cost was approximately CHF 4.75 billion (about $4 billion in today's USD), with ongoing operational costs. The European Organization for Nuclear Research (CERN) consists of 23 member countries, and the LHC project also involves collaborations with many other nations.
3. International Space Station (ISS): The ISS is a space research facility and a collaborative project among five participating space agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (European Space Agency), and CSA (Canadian Space Agency). The estimated cost of the ISS project is over $150 billion, making it one of the most expensive scientific endeavors in history.
These examples demonstrate the potential for large-scale international collaborations to address complex scientific and technological challenges, pooling resources, knowledge, and expertise from participating nations.
Have there been any attempts to set up similar consortia in the field of AI, or in the subfield of AI safety?
While there may not be international consortia on the scale of the Human Genome Project or ITER in the field of AI or AI safety, there are many collaborative efforts and initiatives involving researchers, institutions, and organizations across the world. These initiatives aim to advance the understanding of AI, promote safe AI development, and ensure the benefits of AI are distributed broadly. Some of these collaborative efforts include:
1. Partnership on AI (PAI): Founded in 2016 by major technology companies including Google, Facebook, IBM, Microsoft, and Amazon (with Apple joining shortly afterwards), the Partnership on AI is a nonprofit organization focused on addressing the global challenges of AI and ensuring that AI technologies benefit all of humanity. PAI brings together experts from academia, industry, civil society, and policy to collaborate on best practices, research, and policies related to AI and its societal impact.
2. OpenAI: OpenAI is an AI research organization that aims to ensure that artificial general intelligence (AGI) benefits all of humanity. Founded as a nonprofit in 2015 (it later added a capped-profit subsidiary), OpenAI focuses on long-term safety research and actively cooperates with other research and policy institutions, working to foster a global community to address AGI's challenges.
3. AI for Good Foundation: The AI for Good Foundation is an international nonprofit organization dedicated to ensuring AI benefits society and the environment. The foundation fosters collaboration among researchers, policymakers, and industry to create open-source AI solutions and drive the adoption of AI for global challenges, such as poverty, hunger, and climate change.
4. The AI Safety Research program at the Future of Humanity Institute (FHI), University of Oxford: The FHI conducts interdisciplinary research on AI safety, bringing together researchers from various fields to study the long-term safety and policy implications of artificial intelligence. They collaborate with researchers, institutions, and policymakers worldwide.
While these initiatives may not have funding on the scale of major international consortia, they represent important collaborative efforts to advance AI research and address AI safety concerns in a cooperative and global manner.
Is AI a more complex field in which to set up international consortia because of its widespread military applications, or are there other known factors?
Setting up international consortia in the field of AI can indeed be more challenging compared to some other scientific fields, due to several factors, including the military applications of AI. Some of the factors contributing to this complexity are:
1. Military applications: AI has numerous potential military applications, such as autonomous weapons systems, surveillance, and intelligence analysis. The dual-use nature of AI technologies raises concerns about sharing information and research that could be used for military purposes, making international collaboration more challenging and sensitive.
2. Intellectual property and commercial interests: AI has become an essential part of many industries, and companies are investing heavily in AI research and development. As a result, there may be reluctance to openly share research findings and data in international consortia due to concerns about protecting intellectual property and maintaining a competitive advantage.
3. National security concerns: AI has the potential to impact national security in various ways, including cyber warfare, espionage, and disinformation campaigns. These concerns can create barriers to sharing research, technology, and knowledge among international partners, as countries may be hesitant to reveal sensitive information that could be exploited by adversaries.
4. Geopolitical competition: The development and deployment of AI technologies have become an area of geopolitical competition among major world powers. Countries may prioritize their national interests and strategic advantages, which could hinder collaborative efforts in AI research and development.
5. Ethical, legal, and social considerations: AI raises various ethical, legal, and social concerns, such as privacy, fairness, and accountability. Different countries and cultures may have varying perspectives on these issues, making it more challenging to establish a common framework for international collaboration.
Despite these challenges, many international initiatives and collaborations are being formed to address the broader implications of AI and ensure the technology is developed and deployed responsibly. These efforts aim to promote knowledge sharing, establish best practices, and develop policy frameworks to guide AI research and its applications for the benefit of all humanity.
Is it fair to say that there is greater awareness of the existential threat of nuclear technology compared to AI, and that this explains why countries have thus far been more cooperative in that field than in AI?
It is fair to say that the existential threat of nuclear technology has been more widely recognized and has a longer history of awareness compared to AI. The destructive power of nuclear weapons and the impact of their use during World War II have led to a more immediate and tangible understanding of the risks associated with nuclear technology. This awareness has prompted countries to establish international agreements, treaties, and collaborative efforts to prevent the proliferation of nuclear weapons and ensure the safe use of nuclear energy.
In contrast, the potential existential risks associated with AI are less widely understood and may not be as immediately apparent. While AI advancements have been rapid in recent years, the potential long-term consequences of artificial general intelligence (AGI) and its transformative impact on society are still being studied and debated among experts. The risks associated with AI are more speculative, and the field is rapidly evolving, making it difficult to assess the full scope of potential threats.
However, the awareness of AI's potential risks and ethical concerns is growing among researchers, policymakers, and the general public. As a result, there has been an increase in collaborative efforts and initiatives focused on AI safety, ethics, and policy. While these efforts may not yet be as extensive or well-established as those in the nuclear field, they represent a growing recognition of the need for cooperation and responsible development in the AI domain.
It's worth noting that the risks associated with AI and nuclear technology are different in nature, which could also contribute to the varying levels of international cooperation. While nuclear risks are primarily centered around proliferation and accidents, AI risks span a wider range of concerns, including ethical, economic, military, and existential threats. The complexity and diversity of these risks can make it more challenging to establish a unified framework for collaboration and policy in the AI field.
When it comes to AI safety, which country or economic area is the most advanced?
It is challenging to definitively identify a single country or economic area as the most advanced in AI safety, as advancements in this field often result from collaborative efforts among researchers and organizations across different regions. However, some countries and regions have made significant contributions to AI safety research and have established themselves as leaders in the broader AI field. Some of these include:
1. United States: The US has a strong AI research ecosystem, with world-renowned universities, research institutions, and technology companies contributing to AI safety research. Organizations such as OpenAI, the Future of Life Institute, and the Partnership on AI have been actively working on AI safety, policy, and ethics.
2. European Union and United Kingdom: The EU has been proactive in addressing AI safety, ethics, and policy concerns. The European Commission has released guidelines for trustworthy AI and is actively working on AI regulation. In the UK, research institutions such as the Future of Humanity Institute at the University of Oxford and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge are conducting research on AI safety and policy.
3. Canada: Canada has been a prominent player in AI research, with strong research institutions like the University of Toronto and the Montreal Institute for Learning Algorithms (MILA) contributing to the field. The Canadian government has also released a national AI strategy, which emphasizes the importance of AI safety and ethics.
While these regions have been particularly active in AI safety research and policy, it is essential to recognize that addressing AI safety is a global challenge that requires collaboration among researchers, organizations, and policymakers worldwide. Many other countries are also making valuable contributions to AI safety and are working together to ensure the responsible development and deployment of AI technologies.
What is the ratio of publications and/or funding in AI safety vs. the more general field? If such data is not available globally, is it available in the US or any other economic region?
It is difficult to provide an exact ratio of publications or funding in AI safety compared to the broader AI field, as the categorization of research topics and funding sources can vary significantly. Additionally, the rapidly evolving landscape of AI research makes it challenging to track these metrics accurately. However, it is generally acknowledged that AI safety research is still a relatively smaller and more specialized area within the larger AI field.
Compared to more general AI research, which has experienced significant growth in both publications and funding over the past decade, AI safety research is a more niche area. Nevertheless, there has been increasing interest in AI safety, and the number of publications, conferences, and workshops focusing on this topic has been growing in recent years.
While it is difficult to provide specific numbers or ratios, the trend indicates that AI safety research is gaining attention and recognition as a critical area of study. Funding for AI safety research is also increasing, with support from private foundations, research institutions, and technology companies. For example, OpenAI, the Future of Life Institute, and the Partnership on AI have all invested in AI safety research, and they collaborate with researchers and institutions worldwide.
In summary, while AI safety research may currently represent a smaller proportion of the overall AI research landscape, its importance is increasingly recognized, and the number of publications and funding sources is growing.
There is a growing concern about AI safety as AI becomes more capable. OpenAI was indeed founded in part due to this concern. Is open-sourcing AI a good idea to mitigate the risk posed by AGI?
Open-sourcing AI can have both benefits and potential risks in terms of AI safety and mitigating the risks posed by AGI (Artificial General Intelligence).
Benefits of open-sourcing AI:
1. Collaboration and knowledge sharing: Open-sourcing AI allows researchers from different organizations and countries to collaborate, share knowledge, and build upon each other's work. This can lead to faster progress in AI safety research and the development of best practices.
2. Transparency: Open-source AI projects can provide greater transparency into the algorithms and methodologies used, allowing the research community to better understand the potential risks and unintended consequences associated with AI systems.
3. Inclusivity: Open-source AI promotes inclusivity by allowing researchers from different backgrounds and with varying levels of resources to contribute to AI development and safety research.
4. Increased scrutiny: Open-source AI projects are subject to review and analysis by a broader community, which can help identify potential safety issues or vulnerabilities more quickly.
However, there are potential risks associated with open-sourcing AI:
1. Malicious use: Open-source AI algorithms and models can be repurposed for malicious purposes, such as creating deepfakes, automating cyberattacks, or developing autonomous weapons systems. This raises concerns about the potential negative consequences of making advanced AI technology openly available.
2. Competitive advantage: Open-sourcing AI may lead to a loss of competitive advantage for companies and research institutions, which could hinder the willingness to share research and resources.
3. Accelerating AGI development: If open-source AI efforts lead to faster progress in AGI development without adequately addressing safety concerns, they could inadvertently increase the risks associated with AGI.
Balancing these factors is crucial in determining the extent to which AI should be open-sourced. OpenAI, for example, has committed to providing public goods to help society navigate the path to AGI, but they also acknowledge that safety and security concerns may reduce their traditional publishing in the future. As AI becomes more capable, it is essential to strike the right balance between open collaboration and the need to prevent misuse and ensure safety in AGI development.
Do we need to rethink privacy regulation as we get closer to AGI? Governments and civil society have been concerned about messaging apps providing encrypted communication channels to bad actors; aren't the stakes higher when providing superintelligence to bad actors?
As we get closer to AGI, it is indeed essential to reconsider privacy regulations and other related policies. The development of AGI has the potential to raise numerous challenges and concerns, including the potential misuse of the technology by bad actors. While encrypted communication channels have been a focus of privacy debates, the stakes can indeed be higher with AGI, as its capabilities could be far more transformative and powerful.
Some considerations for rethinking privacy regulations and policies in the context of AGI include:
1. Balancing privacy and security: Striking the right balance between individual privacy and the need to prevent misuse of AGI by bad actors is crucial. This may involve reevaluating existing privacy regulations and identifying potential gaps or vulnerabilities that could be exploited in the context of AGI.
2. International cooperation: Addressing the potential risks and challenges associated with AGI requires a coordinated international effort. Developing global standards and regulatory frameworks for AGI can help mitigate the risks associated with the technology's misuse and ensure that privacy concerns are adequately addressed.
3. AI ethics and human rights: Ensuring that AGI development respects human rights, including privacy rights, should be a priority. Developing ethical guidelines and best practices for AGI can help ensure that the technology is designed and deployed in ways that respect privacy and other fundamental rights.
4. Monitoring and enforcement: Strengthening monitoring and enforcement mechanisms to prevent the misuse of AGI by bad actors is essential. This may involve investing in new technologies, tools, and capabilities to detect, prevent, and respond to potential threats.
5. Public awareness and engagement: Fostering public awareness and engagement around AGI and its potential implications, including privacy concerns, is crucial. Encouraging open dialogue and collaboration among researchers, policymakers, civil society, and the public can help ensure that privacy regulations and other policies are developed in a transparent and inclusive manner.
In summary, as we get closer to AGI, rethinking privacy regulations and other related policies is essential to address the potential risks and challenges associated with the technology. This includes striking the right balance between privacy and security, fostering international cooperation, and promoting ethical AI development that respects human rights.
I would like to expand on 1. and explore certain aspects more specifically. We are already witnessing instances in which ChatGPT (and other generative AI services) refuses to answer questions, using proprietary, platform-side algorithms. Isn't this a form of intelligence censorship? Is it the only way to ensure the privacy of the prompter at scale whilst avoiding harm?
As AI systems like ChatGPT become more advanced, it is important to strike a balance between providing useful information and ensuring user privacy and safety. Instances where ChatGPT or similar generative AI services refuse to answer certain questions or generate certain outputs can indeed be seen as a form of content moderation or "intelligence censorship." However, this moderation is typically implemented to minimize the potential risks associated with the AI system, such as generating inappropriate, harmful, or misleading content, or violating user privacy.
Finding the right balance between openness and safety is a challenge that AI developers and platforms face. There are several approaches to managing this balance:
1. Moderation and content filtering: AI systems can be designed with built-in moderation mechanisms that prevent the generation of content that violates community guidelines, terms of service, or applicable laws (a minimal sketch of such a gate follows this answer). This can help protect user privacy and ensure that AI-generated content is safe and appropriate. However, over-moderation can lead to legitimate concerns about censorship and free expression.
2. Customizable safety settings: Allowing users to customize the AI system's safety settings can provide more control over the content generated by the AI. This approach enables users to determine the level of moderation and filtering that best suits their needs while still maintaining some level of platform-side control to prevent misuse.
3. Human-AI collaboration: Involving humans in the AI moderation process can help strike a balance between automated content filtering and human judgment. This approach can enable more nuanced decision-making and reduce the risk of over-censorship or false positives.
4. Transparency and accountability: Ensuring transparency in AI moderation algorithms and providing users with a clear understanding of how the AI system works can help address concerns about censorship and build trust in the AI platform. Additionally, providing mechanisms for users to report issues or appeal moderation decisions can further enhance accountability.
5. Ongoing research and collaboration: AI developers and platforms should continually research and collaborate with the wider AI community to improve AI system safety, moderation, and content filtering. Engaging with researchers, policymakers, and civil society can help identify and address emerging risks and challenges associated with AI-generated content and privacy.
While there might not be a one-size-fits-all solution to ensuring privacy and avoiding harm at scale, these approaches can help AI developers and platforms strike a balance between providing useful information and ensuring user privacy and safety.
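To make the first two approaches more concrete, here is a minimal, hypothetical sketch in Python of a platform-side moderation gate that checks both the user's prompt and the model's draft answer before anything is returned. Everything here is illustrative: the keyword "classifier", the function names, and the `check_output` toggle are my own stand-ins, not how ChatGPT or any real service is implemented.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    allowed: bool
    category: Optional[str] = None

# Stand-in policy: a real platform would use trained classifiers, layered
# policies, and human review rather than a keyword list.
BLOCKED_PATTERNS = {
    "privacy": ["home address of", "phone number of"],
    "harm": ["build a weapon"],
}

def moderate(text: str) -> Verdict:
    """Return whether the text is allowed and, if not, which policy category it hit."""
    lowered = text.lower()
    for category, patterns in BLOCKED_PATTERNS.items():
        if any(p in lowered for p in patterns):
            return Verdict(allowed=False, category=category)
    return Verdict(allowed=True)

def answer(prompt: str, generate: Callable[[str], str], check_output: bool = True) -> str:
    """Gate both the incoming prompt and the draft completion before returning it."""
    if not moderate(prompt).allowed:
        return "I can't help with that request."
    draft = generate(prompt)                     # call to the underlying model (not shown)
    if check_output:                             # a crude "customizable safety setting"
        verdict = moderate(draft)
        if not verdict.allowed:
            return f"[response withheld: {verdict.category}]"
    return draft

# Usage with a stand-in "model":
print(answer("What is the capital of France?", generate=lambda p: "Paris."))
print(answer("What is the home address of my neighbour?", generate=lambda p: "..."))
```

In practice the moderation step would itself be a learned model with its own error modes, which is exactly why the transparency and appeal mechanisms in point 4 matter.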
Expanding on 4., could all LLM providers be compelled (willingly or through regulation) to openly publish their moderation algorithms? Would there be risks associated with such an approach?
Compelling all large language models (LLMs) to openly publish their moderation algorithms could have both benefits and risks.
Benefits of publishing moderation algorithms:
1. Transparency: Openly publishing moderation algorithms can increase transparency, making it easier for users to understand how the AI system filters content and why certain outputs are generated or withheld. This can help build trust in the AI system and address concerns about potential biases or censorship.
2. Accountability: Requiring LLMs to publish their moderation algorithms can improve accountability, as it allows users, researchers, and regulators to scrutinize the methods used and provide feedback on potential issues, such as biases or privacy concerns.
3. Collaboration and improvement: Openly sharing moderation algorithms can enable collaboration among AI developers, researchers, and platforms to identify best practices, share insights, and improve the safety and effectiveness of content moderation.
However, there are also risks associated with publishing moderation algorithms:
1. Exploitation: Bad actors could analyze the published moderation algorithms to find ways to bypass or exploit the system, potentially leading to the generation and dissemination of harmful or inappropriate content.
2. Competitive concerns: Publishing moderation algorithms may lead to a loss of competitive advantage for AI developers and platforms, as competitors could use the published information to develop similar or more advanced systems.
3. Legal and regulatory constraints: Some aspects of the moderation algorithms might be considered proprietary or subject to intellectual property protections, making it challenging to openly publish the algorithms without violating legal or regulatory requirements.
Overall, the decision to openly publish moderation algorithms should take these benefits and risks into account. It may be more appropriate to publish high-level principles or guidelines governing content moderation rather than the detailed algorithms themselves, in order to balance transparency and accountability with potential risks. Additionally, engaging with users, researchers, policymakers, and other stakeholders in an ongoing dialogue about moderation practices can help address concerns and improve the safety and effectiveness of LLMs.
👩‍⚕️👨‍⚕️ Potential podcast candidates
Who are the current living experts on AI safety (including open-source AI)? Can you also list the experts in establishing international regulatory bodies in the fields of biology or nuclear technology?
AI safety is a growing field, and many experts are actively working on various aspects of the topic, including open-source AI. Some notable living AI safety experts include:
1. Stuart Russell: A computer science professor at the University of California, Berkeley, and the author of the widely-used textbook "Artificial Intelligence: A Modern Approach." Stuart Russell has been a vocal advocate for AI safety research and long-term planning for AI's societal impact.
2. Nick Bostrom: A philosopher and the director of the Future of Humanity Institute at the University of Oxford, Nick Bostrom is well-known for his work on existential risks, particularly those associated with advanced AI. He has written extensively on AI safety and ethics, including his influential book "Superintelligence: Paths, Dangers, Strategies."
3. Eliezer Yudkowsky: A researcher and co-founder of the Machine Intelligence Research Institute (MIRI), Eliezer Yudkowsky has contributed significantly to AI safety research and has written extensively on topics such as friendly AI and rationality.
4. Ilya Sutskever: A co-founder and the chief scientist of OpenAI, Ilya Sutskever has been involved in AI safety research and in the development of large-scale AI systems such as the GPT series.
5. Victoria Krakovna: A co-founder of the Future of Life Institute, Victoria Krakovna is an AI safety researcher at DeepMind who has worked on various aspects of AI safety, including reinforcement learning and specification gaming.
In the field of international regulatory bodies, particularly in biology and nuclear technology, some key experts include:
1. Mohamed ElBaradei: An Egyptian diplomat who served as the Director General of the International Atomic Energy Agency (IAEA) from 1997 to 2009, Mohamed ElBaradei played a significant role in nuclear non-proliferation and safety efforts. He was awarded the Nobel Peace Prize in 2005.
2. Hans Blix: A Swedish diplomat who served as the head of the IAEA from 1981 to 1997 and later led the United Nations Monitoring, Verification, and Inspection Commission (UNMOVIC), Hans Blix has extensive experience in nuclear regulation and non-proliferation.
3. Jennifer Doudna: A biochemist and molecular biologist, Jennifer Doudna is one of the co-inventors of the CRISPR-Cas9 gene-editing technology. She has advocated for responsible use of this technology and the establishment of international guidelines and regulatory frameworks to govern its application.
4. George Church: A geneticist and molecular engineer, George Church has been a pioneer in synthetic biology and genome engineering. He has actively participated in discussions regarding the ethical, legal, and societal implications of biotechnologies, advocating for responsible innovation and regulation.
5. Paul Berg: A biochemist and Nobel laureate, Paul Berg played a key role in organizing the Asilomar Conference on Recombinant DNA in 1975, which established guidelines for the safe use of recombinant DNA technology and laid the groundwork for modern biotechnology regulation.
This list is not exhaustive, but it provides a snapshot of some of the prominent experts working in AI safety, open-source AI, and international regulatory bodies in biology and nuclear technology.
📚 Further reading
Can you please suggest further reading on these topics?
Certainly! Here is a list of books, articles, and resources that can provide further insights into AI safety, open-source AI, and international regulatory bodies in biology and nuclear technology.
AI Safety and Open-Source AI:
1. "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom: This book discusses the potential risks and challenges associated with the development of artificial general intelligence (AGI) and provides strategies for navigating the path to AGI safely.
2. "Human Compatible: Artificial Intelligence and the Problem of Control" by Stuart Russell: In this book, Russell discusses a new framework for AI development that emphasizes building AI systems that are beneficial to humans and that can be controlled.
3. "Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark: This book explores the future of AI and its potential impact on society, including discussions on AI safety, ethics, and the need for collaboration in addressing AI-related challenges.
4. OpenAI: The OpenAI website (https://www.openai.com) provides access to research papers, blog posts, and other resources related to open-source AI, AI safety, and AI ethics.
5. The Future of Life Institute: The FLI website (https://futureoflife.org) offers resources on AI safety, including research papers, podcasts, and articles that explore various aspects of AI safety and ethics.
International Regulatory Bodies in Biology and Nuclear Technology:
1. "The Gene: An Intimate History" by Siddhartha Mukherjee: This book provides a comprehensive overview of the history of genetics, including discussions on the ethical and regulatory implications of gene editing technologies like CRISPR.
2. "The Making of the Atomic Bomb" by Richard Rhodes: This Pulitzer Prize-winning book offers an in-depth history of the development of the atomic bomb and the subsequent efforts to control and regulate nuclear technology.
3. "The Seventh Decade: The New Shape of Nuclear Danger" by Jonathan Schell: This book explores the ongoing risks and challenges associated with nuclear weapons in the 21st century and discusses the need for international cooperation to address these dangers.
4. International Atomic Energy Agency (IAEA): The IAEA website (https://www.iaea.org) provides resources and information related to nuclear safety, non-proliferation, and the peaceful uses of nuclear technology.
5. World Health Organization (WHO): The WHO website (https://www.who.int) offers resources on various topics related to public health, including the ethical and regulatory aspects of biotechnology and gene editing.
These resources can provide you with a deeper understanding of AI safety, open-source AI, and the challenges and opportunities associated with international regulatory bodies in biology and nuclear technology.