A Human-Focused AI Future: Strategy, Talent, and Culture
With: Seth Dobrin
Date: Aug. 16, 2025
Interview Transcript: Dr. Seth Dobrin
ALEX (00:13):
Hello, everyone. Today, we are very glad to invite a visionary leader transforming business through AI. Dr. Seth Dobrin stands as a prominent figure at the intersection of artificial intelligence, business transformation, and responsible innovation. With over two decades of pioneering work spanning genomics research, Fortune 500 digital transformation, and venture capital leadership, you represent the gold standard for organizations seeking to harness AI’s transformative power while maintaining ethical standards and human-centered approaches. Dr. Dobrin, you have extensive experience in the booming and critical field of AI. What background led you to become IBM’s first-ever Chief AI Officer? This role seems like a highly innovative position, even for a major multinational corporation.
Dr. Dobrin (01:30):
It’s interesting because I’m actually a human geneticist by training, so people find it a bit odd that someone with that background would wind up as a Chief AI Officer. However, I’ve been in the field of AI since graduate school. In the late 1990s and early 2000s, people started applying machine learning and AI to solve genetics problems, and I was one of those people. Between the mid-1990s and 2006, I applied these tools in research institutes and startups. In 2006, I joined Monsanto, an agricultural company, and began transforming Fortune 500 companies. Over the course of 10 years, I was involved in leading data and AI transformation there. During that time, I developed a methodology to transform large organizations using a business-first, human-centered approach. After 10 years, I wanted to do something new and move to a tech company. I ended up at IBM, where I worked to transform their operations and also helped their customers—some of the biggest companies in the world—figure out how to transform their organizations. That’s how I wound up at IBM.
ALEX (03:19):
Very good. As a renowned expert in AI, you often emphasize the importance of AI within organizations. AI is no longer futuristic; it’s a present-day technological reality offering vast potential for businesses across multiple sectors. You’ve mentioned the importance of human-focused AI in many contexts. Can you elaborate on your concept of human-focused AI? We know that prominent organizations like Stanford University have also emphasized this. What circumstances led you to believe that human-focused AI needs to be prioritized from technical, business, and governance perspectives? How feasible do you think this human value alignment is?
Dr. Dobrin (04:36):
There are many dimensions to human-focused AI. For example, Fei-Fei Li at Stanford’s Institute for Human-Centered AI looks at it realistically from technical, business, and regulatory perspectives. In my book, AI iQ for a Human-Focused Future, I discuss it primarily from a business perspective, but also touch on technical aspects, much like Stanford’s approach. When I talk about human-focused AI, I mean considering the human in the equation from the start. Often, AI teams don’t consider humans until the end, which leads to problems. Almost always, there’s a human or multiple humans involved in the decisions driven by AI. Ignoring them can lead to regulatory consequences, ethical issues, bias, or usability problems. You need to start with the human.
There are frequently multiple types of humans involved: users of the AI system and those impacted by it. For example, in a credit AI system, you have the underwriter (the user) and the person applying for credit (the impacted individual). These are two different humans interacting with the system, and you need to consider them differently. If you don’t build AI with both in mind—considering usability, explainability, and transparency—you’ll likely need to rework the system. Additionally, if you don’t consider humans upfront, you may waste time building an application that your organization won’t deploy. I once worked with a healthcare client whose AI system was rejected by their governance board after 18 months of development because it didn’t align with their values. Had they started with a human-focused approach, they would have realized this earlier.
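The two-audience point above can be made concrete in code. This is a minimal illustrative sketch, not anything from Dr. Dobrin's work: the class, field names, and approval threshold are all hypothetical. It shows one decision record carrying two separate explanations, a technical one for the underwriter who uses the system and a plain-language one for the applicant it affects.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    """One AI-assisted credit decision, explained for both humans involved."""
    application_id: str
    score: float                   # model output in [0, 1]
    approved: bool
    underwriter_explanation: dict  # technical: feature contributions, largest first
    applicant_explanation: str     # plain-language reason for the impacted person

def explain_for_both(application_id: str, score: float,
                     contributions: dict, threshold: float = 0.6) -> CreditDecision:
    approved = score >= threshold
    # The underwriter sees which features drove the score, ranked by magnitude.
    tech = dict(sorted(contributions.items(), key=lambda kv: -abs(kv[1])))
    # The applicant sees only the single biggest factor, in plain language.
    top_feature = next(iter(tech))
    plain = ("Approved." if approved
             else f"Declined: '{top_feature}' was the main factor.")
    return CreditDecision(application_id, score, approved, tech, plain)

decision = explain_for_both("APP-001", 0.42,
                            {"debt_to_income": -0.30, "payment_history": 0.12})
print(decision.applicant_explanation)  # Declined: 'debt_to_income' was the main factor.
```

The design choice mirrors the transcript: if either explanation is bolted on at the end rather than produced alongside the score, one of the two humans is left without the transparency they need.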
ALEX (07:48):
That’s very valuable and critical insight about human-focused AI. You also mentioned your new book, AI iQ for a Human-Focused Future: Strategy, Talent, and Culture, where you offer a groundbreaking approach to integrating AI and generative AI into business, emphasizing a business-strategy-first mindset rather than a technology-centric one. Could you introduce the five main characteristics of this innovative mindset?
Dr. Dobrin (08:36):
If you step back, it’s a logical mindset. When implementing a technology or making an investment, it should be tied to business value. You shouldn’t make significant investments without connecting them to organizational objectives. Years ago, when organizations started investing in machine learning or data science, they often hired teams without clear objectives. Many of these organizations never saw value from those teams because there was no clear strategy. This issue persists today, even with generative AI, where tools are implemented without clear business objectives.
The five key characteristics of this mindset are:
- Business Alignment: Ensure AI initiatives are tied to corporate objectives.
- Strategic Planning: Develop a clear strategy for AI implementation that aligns with organizational goals.
- Infrastructure Capacity: Assess whether your organization has the right infrastructure, architecture, and data maturity to support AI.
- Compliance and Governance: Ensure AI initiatives meet regulatory and ethical standards.
- Cultural Change: Tie AI efforts to business outcomes to drive cultural adoption. When leaders communicate how AI contributes to organizational success, employees can see their role in that success, fostering cultural change.
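The five characteristics above read naturally as a pre-investment checklist. The following sketch is purely illustrative (it is not from the book, and the criterion keys and go/no-go rule are assumptions): it scores a proposed AI initiative against the five characteristics and reports any gaps before resources are committed.

```python
# Hypothetical pre-investment checklist based on the five characteristics.
CRITERIA = [
    "business_alignment",       # tied to a corporate objective?
    "strategic_planning",       # clear implementation strategy?
    "infrastructure_capacity",  # infrastructure, architecture, data maturity?
    "compliance_governance",    # regulatory and ethical standards met?
    "cultural_change",          # outcomes communicated to drive adoption?
]

def readiness(answers: dict) -> tuple:
    """Return (go_or_no_go, list_of_gaps) for a proposed AI initiative.

    Per the business-first mindset, any unmet criterion is a gap to close
    before investing, not something to patch after deployment.
    """
    gaps = [c for c in CRITERIA if not answers.get(c, False)]
    return (len(gaps) == 0, gaps)

ok, gaps = readiness({"business_alignment": True, "strategic_planning": True,
                      "infrastructure_capacity": False,
                      "compliance_governance": True, "cultural_change": True})
print(ok, gaps)  # False ['infrastructure_capacity']
```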
ALEX (11:01):
That’s a wonderful summary of the main characteristics of your innovative mindset. Let’s turn to the practical side. You have a wealth of business practice and successful case studies in AI consulting. Is that your main business now? What do you think is the essential difference between AI strategic consulting and traditional corporate strategy consulting? I recall a famous strategy expert whose company went bankrupt a few years ago due to the dramatic changes brought by AI and digital transformation. How do you view the balance between using AI to drive social progress and business innovation, especially considering ethics and sustainability? By the way, I’m a founding member of United Nations University, which has a unique AI global network, and we pay significant attention to ethics and sustainability in line with UN goals.
Dr. Dobrin (13:04):
I wasn’t part of IBM’s consulting organization, so I can’t directly comment on the differences between AI consulting and traditional corporate strategy consulting. However, I can discuss the impact of AI on consulting in general. I believe consulting is being disrupted by AI, particularly generative AI, which is reducing the need for traditional consulting services. Regarding your point about AI’s impact on sustainability, I think the race for larger general-purpose language models is unnecessary. Businesses don’t need these massive models; smaller, purpose-built models trained on specific business data are more effective, private, and cost-efficient. Large models often make it harder to achieve a return on investment, especially with techniques like RAG or fine-tuning, which increase costs.
As for ethics, I believe it’s a problematic term when discussing AI. Instead, we should focus on safety, responsibility, transparency, and trust. Ethics is a slippery slope because it raises questions about whose ethics are being applied. When ethics are embedded in a model, especially large generative models, they may not align with other cultures or contexts. We need to prioritize embedding safety, responsibility, and transparency into AI systems.
ALEX (16:25):
Let’s talk more about your specific expertise. Industry data shows that finance and healthcare are the largest application areas for AI. Your research work combined molecular genetics, kinematic robotic automation, and software development, pioneering what we now recognize as modern data science. Can you introduce the background and application perspective of AI in these industries?
Dr. Dobrin (17:30):
My work began with early applications of microarrays, like the genotyping technology 23andMe uses for DNA testing. I helped develop the core technology and analysis methods, which involved significant machine learning and automation. Later, at Monsanto starting in 2006, we transformed agriculture. Before then, plant breeding was done manually—breeders would visually select plants that looked promising. We introduced molecular markers and DNA-based decision-making, predicting which plants would succeed using whole-genome data. This scaled to processing hundreds of millions of seeds annually, with massive automation in labs and fields. This transformation, developed at Monsanto and later Syngenta (now a Chinese-owned company), is now standard in agriculture.
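The core idea of the DNA-based selection described above can be sketched in a few lines. This is a toy illustration, not Monsanto's actual models: the marker names, effect sizes, and additive scoring are all assumed for the example. Each molecular marker gets an estimated effect on performance, and candidate plants are ranked by their summed genomic score instead of by visual inspection.

```python
# Toy marker-assisted selection: illustrative numbers only.
marker_effects = {"m1": 0.8, "m2": -0.3, "m3": 0.5}  # assumed per-allele effects

def breeding_score(genotype: dict) -> float:
    """Genotype maps each marker to an allele count (0, 1, or 2 copies).

    A simple additive model: the score is the sum of allele counts
    weighted by each marker's estimated effect.
    """
    return sum(effect * genotype.get(marker, 0)
               for marker, effect in marker_effects.items())

candidates = {
    "plant_A": {"m1": 2, "m2": 0, "m3": 1},  # score 2.1
    "plant_B": {"m1": 0, "m2": 2, "m3": 2},  # score 0.4
}
ranked = sorted(candidates, key=lambda p: breeding_score(candidates[p]), reverse=True)
print(ranked)  # ['plant_A', 'plant_B']
```

The point of the sketch is the scaling property: once selection is a function of genotype data, it can be applied to hundreds of millions of seeds per year, which no amount of visual inspection could match.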
ALEX (19:57):
You’ve been profoundly involved in a wide range of applications. In recent years, important AI regulations and governance standards have been proposed in the EU, the U.S., and China, such as GDPR, CCPA, and the EU AI Act, posing significant challenges. The complexity is magnified by the ambiguity surrounding these regulations, demanding a refined approach to compliance. Which of these do you think are worth promoting, and which have structural issues? What principles do you believe are most needed to supplement current AI governance frameworks?
Dr. Dobrin (21:21):
When the EU AI Act was in its final stages, it was outcome-based, which was promising. However, after ChatGPT’s release, the EU overreacted, making the Act overly technical and ambiguous. It’s now unclear what organizations are supposed to do, even with recent guidance. The original design was better because it focused on outcomes rather than technology, which would have made it more future-proof. Regulations lag behind AI’s rapid evolution, so they need to focus on protecting humans—against deepfakes, for example—without specifying technical details. For instance, regulations should require that AI-generated content be identifiable and resistant to manipulation, without dictating how. Banking regulations, like anti-money laundering rules, work this way: they specify outcomes, not methods. This approach is more adaptable.
ALEX (21:38):
Do you think the EU AI Act is also an attempt to protect local industries and limit competition from U.S. AI giants?
Dr. Dobrin (22:01):
The EU’s goal with the AI Act was likely to replicate GDPR’s success, which became a global standard because it was the highest bar. Multinationals adopted it universally. The EU hoped to do the same with the AI Act, but its overreaction to ChatGPT made it overly technical and outdated. A human-focused, outcome-based approach would have been more effective.
ALEX (25:49):
In the banking industry, most data is structured, making it easier to use for preventing issues like deepfakes. But AI often deals with unstructured or even erroneous data. How do you address this complexity?
Dr. Dobrin (26:46):
Take deepfakes as an example. I’m starting a world-model company that creates near-perfect simulations of the world. Unlike LLM-generated deepfakes, which their imperfect physics can often reveal, our outputs are convincing enough that we embed watermarks with details like the company name, creator, and IP address. Regulations should require such identifiers in AI-generated content, resistant to manipulation, without specifying the technology. This is similar to cybersecurity regulations, which handle diverse data types (images, videos, text) and focus on outcomes rather than methods.
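One way to make "identifiable and resistant to manipulation" concrete is tamper-evident provenance metadata. The sketch below is purely illustrative and is not the company's actual watermarking scheme (a real watermark is embedded in the media itself; here the key, field names, and signing scheme are all assumptions): it signs provenance metadata with an HMAC so that altering either the content or the metadata invalidates the signature.

```python
import hashlib
import hmac
import json

# Hypothetical provider-held signing key; a real deployment would manage
# keys properly (e.g., rotation, hardware protection, public verifiability).
SECRET = b"provider-held signing key"

def sign_content(content: bytes, metadata: dict) -> dict:
    """Attach signed provenance metadata (creator, company, etc.) to content."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(SECRET, content + payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": tag}

def verify(content: bytes, record: dict) -> bool:
    """True only if neither the content nor the metadata was altered."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET, content + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = sign_content(b"frame-0001",
                      {"creator": "example-model", "company": "ExampleCo"})
print(verify(b"frame-0001", record))         # True
print(verify(b"frame-0001-edited", record))  # False: content was altered
```

Note how this matches the outcome-based framing in the transcript: a regulation could require that provenance be verifiable and tamper-evident without mandating HMACs, watermarks, or any other specific mechanism.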
ALEX (28:31):
Google recently announced a new world model, showing a positive future for handling complexity and risk. For our final question: many worry about AI replacing humans. However, given that current AI is still probabilistic and statistical, far from true intelligence, we may be more concerned with specific industry issues than philosophical questions. As an industry leader named one of DataIQ’s 2024 Most Influential People and an AI for Good Champion, what AI trends and challenges do you believe will redefine industries over the next 4 to 6 years?
Dr. Dobrin (30:26):
Four to six years is too long; the transformation is happening faster. I agree that AI today isn’t intelligent—it can’t reason like humans. But it doesn’t need to be human-like to replace jobs. We’re already seeing tens of thousands of software developers laid off because companies don’t need them. Startups are now launching with one- to five-person teams, which was unheard of before. In three years, I predict an 80% reduction in the software development workforce globally. Consulting firms have also stopped hiring interns and early professionals because generative AI is disrupting their business.
We’re in the middle of an industrial revolution that will unfold in 5 to 10 years, unlike past revolutions that took 45 to 80 years. This will impact even those in their late 50s and early 60s. Society needs to address how to transition workers. If businesses reduce workforces to increase margins but consumers can’t afford their products, profitability will suffer. We need to encourage entrepreneurship and build social infrastructure to support reduced workforces. This trend is global—happening in the U.S., China, Singapore, Europe, India, and the Middle East. Corporations must also support employees during layoffs.
ALEX (34:59):
I completely agree. Entrepreneurship remains critical in any technological revolution. Thank you so much for your wonderful insights and sharing.
Dr. Dobrin (35:30):
Thank you for having me. I appreciate it.