AI chatbots now reach 2 billion people monthly. But whose values are we adopting?
The Expanding Reach of AI Chatbots: A Reflection on Values and Responsibility
By 2025, AI-powered chatbots have become a cornerstone of daily life for a vast global population. Over two billion individuals now engage with AI platforms such as ChatGPT, Google's Gemini, and Meta's AI assistants in Facebook, Instagram, and WhatsApp, as well as other emerging systems like Claude, Perplexity, xAI's Grok, and DeepSeek. This extraordinary proliferation underscores the profound influence AI now wields in shaping access to information, advice, and decision-making worldwide.
The Distribution of Users and Platforms
Breaking down these figures reveals a diverse landscape: approximately half of the monthly active users (MAUs)—around one billion people—interact with open-source AI models, notably Meta’s LLaMA family. The remaining half relies on proprietary systems, including ChatGPT and Google’s Gemini. For instance, Meta reports approximately one billion users engaging with its AI assistants across major social media platforms, while ChatGPT alone commands between 500 and 600 million MAUs, with platforms like xAI and Perplexity rounding out the global tally.
Behind these staggering numbers is a relatively small workforce—roughly 85,000 professionals worldwide—primarily based in the United States, especially California. These individuals are responsible for developing, maintaining, and operating the AI systems that touch hundreds of millions, if not billions, of lives each month. This concentrated group, smaller than the staff of many major corporations, wields an outsized influence on how knowledge is accessed and how personal advice and decisions are shaped.
Core Ethical and Societal Questions
Take ChatGPT as a leading example: with its extensive user base, nearly half of whom are between ages 18 and 35, the platform has transcended its initial productivity and learning functions to become a source of relationship guidance, legal counseling, and even medical advice. OpenAI emphasizes its mission as ensuring that “artificial general intelligence benefits all of humanity,” yet this lofty goal raises critical questions about who shapes the underlying values embedded in these systems.
The small cohort of developers, AI researchers, and corporate leaders—operating within relatively homogeneous cultural and moral frameworks—bears significant responsibility. Their personal beliefs, biases, and priorities inevitably influence the design and deployment of these tools. But can such a limited group genuinely encapsulate the full spectrum of humanity's diverse moral, cultural, and philosophical values? Or does reliance on a few, potentially insular perspectives risk embedding a narrow worldview into systems used by billions?