Abstract: The value alignment of large language models is a global issue, bearing on safe collaboration as enterprises and societies adopt these technologies. Aligning the behavior of large language models with the value intentions of decision-makers and with societal norms was identified as the core challenge for ensuring safety and trust. Formal rationality and substantive rationality, two philosophical concepts proposed by Max Weber, were introduced to explore value alignment mechanisms. Four value alignment states in enterprise management were categorized: “high formal rationality–low substantive rationality” as technical drift, “high substantive rationality–low formal rationality” as value prioritization, “low formal rationality–low substantive rationality” as alignment failure, and “high formal rationality–high substantive rationality” as dynamic alignment. Transparency, clarity, and sociality were identified as the analytical criteria for value alignment. Pathways to achieving value alignment in enterprise management were proposed: embodying cognitive capability along the “technical drift→dynamic alignment” pathway, clarifying technical intentionality along the “value prioritization→dynamic alignment” pathway, and constructing meaning along the “alignment failure→dynamic alignment” pathway. The findings provide theoretical support and practical insights into the value alignment mechanisms of large language models in enterprise management.