In a bid to “deepen the public conversation about how AI models should behave,” AI company OpenAI has introduced Model Spec, a document that shares the company’s approach to shaping desired model behavior.
Model Spec, now in a first draft, was introduced May 8. The document lays out objectives, rules, and default behaviors that will guide OpenAI's researchers and the AI trainers who work on reinforcement learning from human feedback (RLHF), and explains how the company evaluates trade-offs when those principles conflict. The company will also explore how much its models can learn directly from the Model Spec.
The Model Spec draws on documentation used at OpenAI today, the company’s experience and ongoing research in designing model behavior, and more recent work, including inputs from domain experts, OpenAI said. The company expects the Model Spec to change over time.
Objectives of the Model Spec include assisting the developer and user, benefiting humanity, and reflecting well on OpenAI. Rules include following the chain of command, complying with applicable laws, respecting creators, protecting privacy, not producing not-safe-for-work (NSFW) content, and not providing information hazards. Default behaviors include encouraging fairness and kindness, using the right tool for the job, assuming best intentions from the user or developer, expressing uncertainty, and being as helpful as possible without overstepping.
OpenAI said it views its work on the Model Spec as part of an ongoing public conversation. The company seeks opportunities to engage with globally representative stakeholders, including policymakers, trusted institutions, and domain experts, to learn how they understand the approach, whether they support it, and whether additional objectives, rules, and defaults should be considered.