Eurokd
European Knowledge Development Institute

Perspective Article

Cognitive Machines and Cultural Shifts: How AI Redefines Organizational Development and Consulting Practice

International Journal of Behavior Studies in Organizations, Volume 14, Pages 50–60, https://doi.org/10.32038/jbso.2025.14.04


Cognitive Machines and Cultural Shifts: How AI Redefines Organizational Development and Consulting Practice

Joshua Scoggin

Global Academy of Finance and Management, Colorado, USA

ABSTRACT:

This paper examines the transformative impact of Artificial Intelligence (AI) on individual cognition, organizational culture, and the evolving dynamics of trust within hybrid human–machine systems. Building on recent research, it argues that AI reshapes not only how individuals think and make decisions but also how organizations define collaboration, ethics, and cultural norms. Drawing on insights from Aakula et al. (2024), Pawar and Shah (2024), Russell et al. (2025), Choi (2025), and Erengin et al. (2024), the paper positions AI as both a cognitive and relational actor that reconfigures the foundations of Organizational Development (OD) and consulting practice. Through a synthesis of literature and conceptual analysis, it highlights how AI-driven transformation calls for new approaches to leadership, change management, and ethical governance—especially concerning trust, mediation, and human–AI collaboration.

KEYWORDS: Organizational Development, Artificial Intelligence, Cultural Shifts, Social Presence Theory (SPT)

Artificial Intelligence (AI) has become a defining force in how people think, collaborate, and work. Beyond automation, AI systems now influence organizational identity, culture, and collective learning. Aakula et al. (2024) note that AI-enabled digital transformation is shifting organizations from hierarchical to data-driven and adaptive systems. This shift goes beyond efficiency—it entails a deeper cultural realignment toward agility, transparency, and continuous learning. Similarly, Pawar and Shah (2024) show that AI integration has redefined communication, collaboration, and values, fostering innovation-oriented cultures. Russell et al. (2025) add that AI’s cognitive effects vary by task type: it enhances performance in structured tasks but increases cognitive strain in subjective or reflective work. In this evolving context, Organizational Development (OD) and consulting practices must adapt to a world in which human and machine cognition co-evolve.

AI does not merely automate managerial processes; it co-evolves with human intelligence, forming a synergistic partnership that reshapes how organizations plan, decide, and lead (Islami & Mulolli, 2024). This relationship represents augmentation rather than substitution—an expansion of human capabilities where cognitive, emotional, and creative intelligences intersect. As Islami and Mulolli (2024) argue, managerial effectiveness increasingly depends on balancing automation with augmentation—using AI not as a replacement for human agency but as a cognitive collaborator that enhances it. From this perspective, Organizational Development (OD) becomes the orchestration of symbiotic intelligence systems—human and machine—driving collective learning and cultural renewal.

Recent problematizing reviews have challenged traditional organizational and management theories of AI. Ramaul et al. (2025) argue that prevailing discourses often rely on two assumptions: rationality—seeing AI as a bias-free, hyper-logical agent—and anthropomorphism—treating AI as a humanlike actor capable of learning, agency, and creativity. While these metaphors are useful, they risk oversimplifying the socio-material entanglement of humans, algorithms, and data. By contrast, this paper adopts a symbiotic perspective, viewing AI not as a purely rational machine or quasi-human actor but as a relational partner in organizational sensemaking (Ramaul et al., 2025).

Recent theoretical work also emphasizes AI’s influence on meaning-making within organizations. Yaroğlu (2024), drawing on the hermeneutic cycle, argues that organizational culture evolves as members continuously interpret and reinterpret AI’s presence and outputs—creating an ongoing feedback loop between human understanding and technological mediation.

Machines as Cultural Agents

Recent scholarship has expanded the understanding of AI by exploring how it promotes adaptive learning across digital ecosystems. Munir et al. (2022) show that machine learning algorithms personalize instruction, predict engagement, and enhance collective learning outcomes in digital education. These same mechanisms apply to organizational contexts, where AI facilitates knowledge diffusion, collaboration, and performance optimization. Thus, AI not only mediates meaning but also operationalizes distributed learning among human and technological actors.

Insights from educational and collaborative contexts further enrich this view. Law et al. (2025) examined how Generative AI (GAI) supports interdisciplinary teamwork and problem-solving in authentic, real-world settings. Their findings reveal that GAI functions not merely as a computational tool but as a co-creative partner that extends human cognition and supports joint knowledge construction. In teams with strong digital competence and shared values, GAI mediated communication, creativity, and innovation—highlighting the relational and cultural dimensions of human–AI collaboration. These insights align with this paper’s framing of AI as a cultural agent that both shapes and is shaped by collective learning processes (Law et al., 2025).

AI systems increasingly act as cultural participants within organizations. Through natural language processing, predictive analytics, and emotional AI, machines now influence tone, decision-making, and psychological safety. Vicci (2024) explains that emotionally intelligent AI can perceive human affect, facilitating empathetic communication and improving engagement. Russell et al. (2025) emphasize that AI’s cognitive effects are task-dependent: while it reduces workload in objective contexts, it can heighten strain during subjective or interpretive work. These findings suggest that AI enhances procedural precision yet struggles with meaning-making and empathy—the very foundations of organizational culture. Consequently, AI functions as both a technological and cultural entity, mediating behavior, ethics, and identity across organizational levels.

Yaroğlu (2024), working within a hermeneutic framework, contends that AI systems engage in organizational meaning-making without possessing subjective or phenomenological experience. While AI contributes through data analysis and pattern recognition, genuine understanding and sensemaking remain human responsibilities. Machines can therefore mediate organizational meaning but cannot generate it independently.

Building on this interpretive stance, Hoßbach and Isaksen (2025) advance a metacognitive model of AI-augmented creative problem-solving, showing how generative AI shifts human cognition from execution toward monitoring and reflective control. In this model, creativity becomes a shared process in which humans oversee, adapt, and refine AI-generated ideas. This metacognitive reframing positions AI not merely as a creative collaborator but as a catalyst for meta-learning—transforming how individuals think about thinking and redefining cycles of collective innovation (Hoßbach & Isaksen, 2025).         

Beyond cognition and emotion, AI also functions as a creative collaborator. Grilli and Pedota (2023) demonstrate that AI enhances creativity across individual, group, and organizational levels by expanding informational boundaries, supporting divergent and convergent thinking, and aiding idea generation. Their multilevel model shows that AI systems reduce bounded rationality and enable new forms of creative synthesis—an essential feature of knowledge-based organizations. Thus, AI not only mediates cultural meaning (Yaroğlu, 2024) but also participates in creative enactment, co-producing insights once considered uniquely human.

Grange et al. (2025) advance the idea of the Human–GenAI Value Loop, a framework that illustrates how generative AI can drive human-centered innovation through iterative cycles of co-creation and reflection. Grounded in design sprint methodology, their research shows that GenAI enhances collaborative intelligence by complementing human divergent thinking with AI-driven convergence—clarifying and organizing ideas within innovation teams. This dynamic demonstrates how AI operationalizes cognitive augmentation as an adaptive learning mechanism within organizations, reinforcing its role as both a cultural and cognitive collaborator.

In parallel, Eicke et al. (2025) highlight the importance of strategic AI orientation—the extent to which managerial focus and strategic intent align around AI technologies. Their mixed-methods analysis of S&P 500 firms found that organizations with a strong AI orientation achieved higher technological innovation and cross-domain learning. This finding underscores leadership’s critical role in fostering an intentional AI strategy that integrates governance, capability development, and ethical alignment. From an Organizational Development (OD) standpoint, strategic AI orientation translates the abstract idea of human–AI symbiosis into concrete managerial practice, linking cognitive collaboration with innovation outcomes (Eicke et al., 2025).

Transforming Organizational Culture

Insights from Munir et al. (2022) and Stelmaszak et al. (2025) together reveal that AI-driven transformation operates simultaneously on cognitive and relational levels. Munir et al. (2022) highlight how machine learning fosters individualized and collective learning, while Stelmaszak et al. (2025) emphasize AI’s role as a relational, organizing capability that redefines how human and algorithmic actors co-create meaning and structure. Taken together, these findings bridge empirical and conceptual perspectives, demonstrating that AI-induced cultural change emerges through co-evolutionary learning and dynamic relational systems.

Evidence from organizational practice reinforces this trajectory. Rožman et al. (2023) found that AI-supported leadership and training initiatives significantly enhance employee motivation, collaboration, and overall performance. Their study shows that cultivating AI literacy and adaptive learning strengthens psychological engagement, agility, and resilience within teams. For Organizational Development (OD) practitioners, this means addressing not only the cognitive and ethical implications of AI but also the emotional and developmental dimensions that sustain engagement in hybrid human–machine ecosystems. Effective AI-driven transformation, therefore, must be guided by leadership that promotes inclusive learning and data-informed empowerment across the workforce.

Comparable patterns can be observed in educational contexts. Chang (2025) demonstrates that students using generative AI tools such as ChatGPT and Copilot show improved motivation, collaboration, and autonomous learning. These outcomes parallel organizational dynamics in which generative AI enhances teamwork, feedback, and collective problem-solving. Chang’s findings highlight AI’s dual role as cognitive infrastructure and cultural catalyst—reshaping how humans learn, cooperate, and innovate together.

Pawar and Shah (2024) emphasize that AI integration reshapes the “cultural DNA” of organizations by fostering openness, innovation, and accountability. In a related vein, Aakula et al. (2024) argue that AI decentralizes authority, distributing intelligence across systems and teams. Yet, as Russell et al. (2025) caution, not all processes are equally suited for AI augmentation. Tasks involving ethical deliberation, interpretation, or interpersonal negotiation may experience limited benefits—or even cognitive overload—when automated. Thus, a balanced organizational culture must leverage AI for operational efficiency while preserving human judgment, empathy, and narrative coherence.

Yaroğlu (2024) extends this discussion by showing how transformation unfolds cyclically: AI reshapes organizational norms and practices, while employees reinterpret these changes through a hermeneutic cycle. This ongoing interpretive process redefines shared understanding and reinforces culture as a living, adaptive system rather than a static construct.

Further synthesis emerges when Islami and Mulolli’s (2024) framework of human–AI symbiosis is combined with Grilli and Pedota’s (2023) multilevel creativity model. Together, they suggest that cultural transformation evolves through intertwined processes of augmentation and creation. The symbiotic interaction between human and artificial intelligence enables adaptive learning cycles in which analytical precision and creative exploration reinforce each other. Organizations that cultivate this hybrid intelligence—anchored in empathy, ethics, and innovation—develop greater absorptive capacity (Grilli & Pedota, 2023), allowing them to assimilate and apply new knowledge dynamically. The cultural challenge is therefore not merely adopting AI but interpreting it—translating machine-generated insights into shared organizational meaning (Yaroğlu, 2024).          

Van Giffen et al. (2025) document Siemens’ five-year effort to integrate AI into lean quality management, revealing tensions between human-driven and AI-driven processes—control versus discovery, transparency versus opacity. Their findings show that AI adoption challenges deeply ingrained values of predictability and oversight, requiring intentional cultural mediation and leadership engagement. This underscores the need for culture to evolve alongside technological sophistication, balancing efficiency with interpretability and human agency.

Smith et al. (2024) apply signaling theory to human–AI collaboration, demonstrating that successful teamwork depends on the alignment of signals between human and AI participants and on the autonomy employees have in seeking AI input. When employees voluntarily consult AI—rather than being required to follow its recommendations—they show greater trust and collaboration. This supports viewing AI not as an authority but as a partner in organizational sensemaking and decision-making. In OD practice, this suggests that consultants and leaders should design AI-augmented systems that preserve human agency while enhancing collective competence.

Humberd and Latham (2025) use an agency theory lens to propose that AI can act as an “agent of the firm.” As AI gains autonomy and decision rights, it introduces new challenges of alignment, oversight, and ethical governance. Their agentic AI model reframes managers as stewards of relationships with both human and nonhuman agents, responsible for embedding accountability and value alignment into AI systems.        

Complementing these perspectives, Braojos et al. (2024) highlight that digital transformation capabilities influence not only structure and strategy but also employee commitment and motivation. Their study underscores the mediating role of digital leadership and continuous learning environments in fostering commitment. Collectively, these findings suggest that organizations thriving in AI-driven transformation cultivate cultures of learning, empowerment, and inclusion—conditions that sustain long-term adaptability and engagement.

Trust Formation in Human–AI Systems

Siemon et al. (2025) deepen the analysis of human–AI collaboration by examining how social presence and perceived agency influence trust, engagement, and motivation in hybrid teams. Drawing on Social Presence Theory (SPT) and the Theory of Planned Behavior (TPB), they find that employees’ willingness to rely on AI partners depends not only on task performance but also on perceived authenticity, familiarity, and social connectedness. These findings emphasize that organizational trust in AI is not merely technical—it is a socio-relational process shaped by perception, interaction, and shared purpose. Designing AI systems and consulting practices that foster presence, empathy, and collaborative intent across digital and physical boundaries is therefore essential for sustainable human–AI partnerships.

Choi (2025) underscores the central role of trust in AI adoption, showing that clients’ acceptance of AI-assisted mediation depends on performance expectancy, perceived usefulness, and confidence in the ethical use of AI. Using the Unified Theory of Acceptance and Use of Technology (UTAUT), Choi demonstrates that behavioral intentions toward AI adoption are mediated by trust, pointing to the importance of technological integration that aligns with principles of ethical and procedural justice. Similarly, within organizations, employees’ willingness to engage with AI depends not solely on efficiency but on psychological safety and perceived fairness.

Erengin et al. (2024) add a social dimension to this discussion by introducing the concept of third-party trust transfer to explain how trust develops in AI-augmented teams. Drawing on Social Cognitive Theory (Bandura, 1986), they show that employees’ trust in AI is influenced by the behavior and perceived trustworthiness of their human teammates. This triadic trust model—linking the human trustor, the AI trustee, and the surrounding social context—illustrates how cultural norms and interpersonal observation shape trust formation in hybrid work environments. Their research suggests that organizations must foster cultures where employees regularly observe positive, cooperative human–AI collaboration, normalizing AI as a trustworthy teammate rather than a detached instrument.

Viewed collectively, these perspectives highlight that Organizational Development (OD) must approach AI adoption as a relational and socio-cognitive process. Consultants and leaders should design systems that not only optimize AI performance but also nurture mutual trust, ethical reflection, and human agency. In this sense, trust formation becomes both a psychological and cultural bridge—enabling organizations to navigate AI transformation with integrity, empathy, and shared understanding.

Reframing Managerial Roles

Stelmaszak et al. (2025) argue that AI should be viewed not as an autonomous entity but as an organizing capability that emerges through interactions among human and algorithmic actors. This relational framing redefines leadership and consulting as co-creative practices that generate organizational intelligence through connection, interdependence, and emergence—key attributes of AI-driven collaboration.

In this vein, Grange et al. (2025) present the Human–GenAI Value Loop, which demonstrates that sustainable innovation arises when human creativity and machine intelligence are continually interwoven through cycles of trust, experimentation, and interpretation. Embedding these value loops into Organizational Development (OD) practice allows leaders to cultivate environments that balance automation with autonomy—positioning AI as a facilitator of co-creation rather than a substitute for human insight.         

Taken together, these developments signal a shift in managerial roles from directive leadership to facilitative, emotionally intelligent engagement. Vicci (2024) observes that emotional intelligence—long regarded as a uniquely human capacity—can now be augmented by AI tools capable of detecting stress and mood patterns. Russell et al. (2025) similarly note that AI’s cognitive benefits vary depending on task subjectivity. Managers who recognize these nuances can design workflows that delegate analytical tasks to AI while reserving creative and interpretive work for human teams. This division of cognitive labor reinforces emotional intelligence as a core managerial competency in hybrid organizations, ensuring that empathy and ethics remain central to leadership practice.

Building on Erengin et al.’s (2024) emphasis on trust as a socially mediated process, Yaroğlu (2024) highlights the manager’s role as a cultural interpreter. Leaders mediate between human meaning systems and AI-driven analytics, ensuring that data-informed decisions remain aligned with organizational values and interpretive traditions. Modern leadership thus involves navigating the interface between artificial and human intelligence across all management functions—planning, organizing, leading, and controlling.

Echoing Choi’s (2025) focus on ethical accountability in AI adoption, Islami and Mulolli (2024) argue that effective managers treat AI as a symbiotic partner that extends human cognition while maintaining ethical responsibility. In practice, this means designing workflows in which automation handles structured information and human judgment governs the contextual and creative dimensions. Collectively, these insights portray managers evolving from directive authorities into curators of hybrid intelligence, mediating between algorithmic precision and cultural empathy. Leadership becomes interpretive stewardship—translating AI-driven insights into meaning-aligned, value-consistent organizational decisions.

Consultants in the Age of AI

Organizational Development (OD) consultants are navigating a profound paradigm shift as AI becomes both a technological and socio-cultural force. According to Aakula et al. (2024), successful digital transformation requires consultants to integrate strategy, ethics, and data governance into a unified framework. Pawar and Shah (2024) further emphasize the importance of cultivating organizational cultures that embrace innovation, inclusion, and accountability. In line with Russell et al. (2025), consultants must also evaluate AI’s applicability through the lens of task complexity and subjectivity. While AI can streamline diagnostic and analytical processes, it remains limited in guiding reflection, ethics, or organizational storytelling. Consequently, OD practitioners must combine algorithmic insight with human empathy and ethical discernment.

Building on Yaroğlu’s (2024) hermeneutic model, consultants should approach AI integration as an interpretive and meaning-making process rather than a purely technical implementation. Their role is to help organizations translate algorithmic insights into narratives that align with their culture and values. This human-centered approach resonates with Braojos et al. (2024), who demonstrate that effective digital transformation depends on leadership and learning mechanisms that sustain commitment. For consultants, this translates into designing interventions that go beyond technology deployment toward fostering continuous learning environments.

In practice, this means consultants play a crucial role in facilitating digital literacy, inclusivity, and engagement across organizational hierarchies. They are responsible for embedding transformation as a participatory, developmental process rather than a top-down initiative. The most effective consultants will, therefore, act not as implementers of technology but as interpreters and facilitators of organizational understanding—helping leaders and teams adapt ethically and creatively within AI-augmented systems.

Conflict Management and Team Building

AI technologies are increasingly enhancing conflict management by detecting sentiment patterns and predicting potential interpersonal friction. Vicci (2024) highlights that emotionally aware AI systems can help leaders and consultants address stress, burnout, and communication breakdowns proactively. By leveraging sentiment analysis and team analytics, organizations can increase transparency and responsiveness. However, human oversight remains indispensable. The goal is not to replace empathy with algorithms but to amplify it. When used ethically, AI tools can reinforce psychological safety, inclusivity, and team performance.

Integrating insights from Humberd and Latham (2025) and Van Giffen et al. (2025) shows that the relationship between AI and organizations is both structural and cultural. As AI assumes greater autonomy in decision-making, governance systems must ensure transparency, ethical integrity, and alignment with organizational values. Siemens’ five-year AI integration initiative, for example, revealed that technological innovation must be reconciled with cultural values that sustain trust and engagement. Successful adoption depends not merely on technical capability but on deliberate cultural mediation and leadership involvement.

Recent theoretical developments further link AI-enabled cognition to structured learning mechanisms. Gibson et al. (2023) propose a unified three-level model of learning—micro (individual), meso (team), and macro (organizational)—that illustrates how AI facilitates distributed intelligence across scales. Their framework positions AI as a mediating agent in learning processes, supporting exploration, feedback, and adaptation—hallmarks of effective organizational development. This perspective aligns with the view of AI as a co-creator of meaning within organizations, shaping how teams interpret information and translate insights into cultural transformation.

Drawing on Choi’s (2025) emphasis on ethical accountability and Erengin et al.’s (2024) focus on trust as socially mediated, Kirchner et al. (2025) apply the Technology–Organization–Environment (TOE) framework to show how generative AI simultaneously functions as a facilitator and disruptor of organizational learning. Their study reveals that while AI increases efficiency and knowledge sharing, it can also challenge human agency, trust, and interpretive coherence. Within OD and consulting contexts, these findings highlight the need for ethical governance and hybrid leadership models that preserve human interpretive control over AI-generated knowledge.

Conclusion

The emerging literature on artificial intelligence across organizational and educational contexts underscores the necessity of viewing AI as both a collaborative and organizing force. Munir et al. (2022) empirically demonstrate how AI augments human learning, while Stelmaszak et al. (2025) conceptualize AI as a connective and co-dependent capability. Together, these perspectives suggest that Organizational Development (OD) and consulting practices must evolve toward fostering symbiotic intelligence systems that integrate human creativity with algorithmic adaptability—cultivating a future of shared organizational agency.

Artificial intelligence is therefore more than a technological advancement; it is a cultural and cognitive force redefining how organizations function. As intelligent systems evolve, they transform leadership, collaboration, and consulting practice. Successful integration requires the synthesis of emotional intelligence, ethics, and data literacy. The future of OD will depend on hybrid intelligence—where human empathy meets machine precision—and on leadership that values ethical stewardship, inclusivity, and continuous learning as foundations of thriving AI-enhanced cultures. Yaroğlu’s (2024) hermeneutic framework reinforces that human–AI collaboration is inherently interpretive: organizational meaning evolves through continuous interaction between technological mediation and human reflection. This process sustains ethical, inclusive, and adaptive cultures. Chang (2025) provides further evidence from education, showing that generative AI strengthens cooperation, self-directed learning, and motivation.

Applied to organizational settings, these findings suggest that AI can likewise reinforce team learning, creativity, and adaptive leadership. Meanwhile, Braojos et al. (2024) demonstrate that digital transformation succeeds when leadership and learning mediate the relationship between technology and human commitment. Collectively, these studies affirm that AI’s impact is both cultural and developmental—demanding intentional stewardship to align human values with technological evolution.

The convergence of human and artificial intelligences marks a new epoch in organizational culture—one defined by symbiotic cognition and distributed creativity. As Islami and Mulolli (2024) emphasize, management functions are no longer purely human but hybrid, requiring mutual adaptation between human judgment and AI analytics. Complementarily, Grilli and Pedota (2023) remind us that creativity itself becomes a shared process—a continuous dialogue between human imagination and machine learning. Together, these perspectives affirm that the future of Organizational Development depends on cultivating hybrid intelligence—a synthesis of ethical reflection, creative exploration, and technological augmentation that drives sustainable cultural evolution.


References

Aakula, A., Saini, V., & Ahmad, T. (2024). The impact of AI on organizational change in digital transformation. Internet of Things and Edge Computing Journal, 4(1), 75–87.

Braojos, J., Weritz, P., & Matute, J. (2024). Empowering organisational commitment through digital transformation capabilities: The role of digital leadership and a continuous learning environment. Information Systems Journal, 34(5), 1466–1492. https://doi.org/10.1111/isj.12501

Chang, C.-C. (2025). Evaluating the impact of generative AI tools on learning outcomes, motivation, and cooperation in programming-related courses. International Journal of Education and Research, 13(2), 55–66.

Choi, Y. (2025). Using AI in my disputes? Clients’ perception and acceptance of using AI in mediation. Conflict Resolution Quarterly, 0(1), 1–16. https://doi.org/10.1002/crq.214830

Eicke, A.-K., Sabel, C. A., & Nüesch, S. (2025). Strategic AI orientation and technological innovation: Evidence from managerial insights and panel data. Journal of Product Innovation Management, 0(0), 1–28. https://doi.org/10.1111/jpim.70001

Erengin, T., Briker, R., & de Jong, S. B. (2024). You, me, and the AI: The role of third-party human teammates for trust formation toward AI teammates. Journal of Organizational Behavior, 0(1), 1–26. https://doi.org/10.1002/job.2857

Gibson, D., Kovanovic, V., Ifenthaler, D., Dexter, S., & Feng, S. (2023). Learning theories for artificial intelligence promoting learning processes. British Journal of Educational Technology, 54(5), 1125–1146. https://doi.org/10.1111/bjet.13341

Grange, C., Demazure, T., Ringeval, M., Bourdeau, S., & Martineau, C. (2025). The human–GenAI value loop in human-centered innovation: Beyond the magical narrative. Information Systems Journal, 0(0), 1–23. https://doi.org/10.1111/isj.12602

Grilli, L., & Pedota, M. (2023). Creativity and artificial intelligence: A multilevel perspective. Creativity and Innovation Management, 33(2), 234–247. https://doi.org/10.1111/caim.12580

Hoßbach, C., & Isaksen, S. G. (2025). AI-augmented approaches to creative problem-solving: A metacognitive perspective. Creativity and Innovation Management, 0(0), 1–16. https://doi.org/10.1111/caim.70003

Humberd, B. K., & Latham, S. F. (2025). When AI becomes an agent of the firm: Examining the evolution of AI in organizations through an agency theory lens. Journal of Management Studies, 62(8), 1421–1445. https://doi.org/10.1111/joms.13274

Islami, X., & Mulolli, E. (2024). Human-artificial intelligence in management functions: A synergistic symbiosis relationship. Applied Artificial Intelligence, 38(1), e2439615. https://doi.org/10.1080/08839514.2024.2439615

Kirchner, K., Bolisani, E., Kassaneh, T. C., Scarso, E., & Taraghi, N. (2025). Generative AI meets knowledge management: Insights from software development practices. Knowledge and Process Management, 32(1), 1–13. https://doi.org/10.1002/kpm.70004

Law, N., Wang, N., Ma, M., Liu, Z., Lei, L., Feng, S., Hu, X., & Tsao, J. (2025). The role of generative AI in collaborative problem-solving of authentic challenges. British Journal of Educational Technology, 0(0), 1–21. https://doi.org/10.1111/bjet.70010

Munir, H., Vogel, B., & Jacobsson, A. (2022). Artificial intelligence and machine learning approaches in digital education: A systematic revision. Information, 13(4), 203. https://doi.org/10.3390/info13040203

Pawar, P., & Shah, A. H. (2024). The impact of artificial intelligence on organizational culture: A pathway to digital transformation. ShodhKosh: Journal of Visual and Performing Arts, 5(1), 2159–2172. https://doi.org/10.29121/shodhkosh.v5.i1.2024.5007

Ramaul, L., Ritala, P., Kostis, A., & Aaltonen, P. (2025). Rethinking how we theorize AI in organization and management: A problematizing review of rationality and anthropomorphism. Journal of Management Studies, 0(0), 1–27. https://doi.org/10.1111/joms.13246

Russell, J., Nguyen, T., & Alvarez, M. (2025). Neural and cognitive impacts of AI: The influence of task subjectivity on human–LLM collaboration (arXiv preprint arXiv:2506.04167). arXiv. https://doi.org/10.48550/arXiv.2506.04167

Rožman, M., Tominc, P., & Milfelner, B. (2023). Maximizing employee engagement through artificial intelligent organizational culture in the context of leadership and training of employees: Testing linear and non-linear relationships. Cogent Business & Management, 10(2), 2248732. https://doi.org/10.1080/23311975.2023.2248732

Siemon, D., Elshan, E., de Vreede, T., Ebel, P., & de Vreede, G.-J. (2025). Beyond anthropomorphism: Social presence in human–AI collaboration processes. Journal of Management Studies, 0(0), 1–25. https://doi.org/10.1111/joms.70000

Smith, A., Van Wagoner, H. P., Keplinger, K., & Celebi, C. (2024). Navigating AI convergence in human–artificial intelligence teams: A signaling theory approach. Journal of Organizational Behavior, 46(1), 1–31. https://doi.org/10.1002/job.2856

Stelmaszak, M., Joshi, M., & Constantiou, I. (2025). Artificial intelligence as an organizing capability arising from human–algorithm relations. Journal of Management Studies, 62(7), 1352–1375. https://doi.org/10.1111/joms.70003

Van Giffen, B., Beitinger, G., Ludwig, H., Schiano, B., Schmidt, K., & vom Brocke, J. (2025). The culture clash of AI adoption in lean quality management: Resolving the tensions at Siemens Electronics Works Amberg. Information Systems Journal, 35(5), 987–1012. https://doi.org/10.1111/isj.70006

Vicci, H. (2024). Emotional intelligence in artificial intelligence: A review and evaluation study. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4818285

Yaroğlu, A. C. (2024). The effects of artificial intelligence on organizational culture in the perspective of the hermeneutic cycle: The intersection of mental processes. Systems Research and Behavioral Science, 41(6), 1–13. https://doi.org/10.1002/sres.3037


How to cite this article

Scoggin, J. (2025). Cognitive machines and cultural shifts: How AI redefines organizational development and consulting practice. International Journal of Behavior Studies in Organizations, 14, 50-60. https://doi.org/10.32038/jbso.2025.14.04


Acknowledgments

Not applicable.


Funding

Not applicable.


Conflict of Interests

The author declares no conflict of interest.


Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. You may view a copy of Creative Commons Attribution 4.0 International License here: http://creativecommons.org/licenses/by/4.0/