Facebook parent Meta’s AI research team is developing what it calls a Self-Taught Evaluator for large language models (LLMs), an approach that could help enterprises cut the time and human effort required to build custom LLMs.
Earlier this month, the social media giant’s AI research team, known as Meta FAIR, published a paper on the technique, claiming that such evaluators can let an LLM generate its own synthetic training data for evaluation purposes.
Models used as evaluators, a setup known as LLM-as-a-Judge, are typically trained on large amounts of human-annotated data, which is costly to produce and becomes stale as the model improves, the researchers explained in the paper.
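To make the LLM-as-a-Judge setup concrete, here is a minimal sketch of the pairwise-judging pattern. The prompt template and the `toy_judge` function are hypothetical illustrations (the toy judge simply prefers the longer response); a real system would send the filled prompt to an evaluator LLM instead.

```python
# Minimal sketch of the LLM-as-a-Judge pattern: an evaluator is asked to
# compare two candidate responses to an instruction and pick a winner.

JUDGE_PROMPT = """You are an impartial judge. Given a user instruction and two
candidate responses, answer with 'A' or 'B' to indicate the better response.

Instruction: {instruction}
Response A: {response_a}
Response B: {response_b}
Verdict:"""

def build_judge_prompt(instruction: str, response_a: str, response_b: str) -> str:
    """Fill the judging template for one pairwise comparison."""
    return JUDGE_PROMPT.format(
        instruction=instruction, response_a=response_a, response_b=response_b
    )

def toy_judge(prompt: str) -> str:
    """Stand-in for a real evaluator LLM call: naively prefers the longer
    response. In practice, the prompt would go to a judge model's API."""
    a = prompt.split("Response A: ")[1].split("\nResponse B: ")[0]
    b = prompt.split("Response B: ")[1].split("\nVerdict:")[0]
    return "A" if len(a) >= len(b) else "B"

prompt = build_judge_prompt(
    "Explain photosynthesis in one sentence.",
    "Plants convert sunlight, water, and CO2 into glucose and oxygen.",
    "Plants eat light.",
)
print(toy_judge(prompt))  # → A
```

Collecting human preference labels to train such a judge is the expensive step the Self-Taught Evaluator aims to replace with synthetic data.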