Half hour of labeling power: Can we beat GPT?

Abstract

Large Language Models (LLMs) offer a lot of value for modern NLP: with a well-structured prompt and little or no labelled data, they can achieve surprisingly good accuracy on predictive NLP tasks. But can we do even better? It's often more effective to use LLMs to create classifiers than to use them as classifiers. By using LLMs to assist with annotation, we can quickly create labelled data and build systems that are faster and more accurate than LLM prompts alone. In this workshop, we'll show you how to use LLMs at development time to create high-quality datasets and train smaller, more specific, private and more accurate fine-tuned models for your business problems.
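
To make the workflow concrete, here is a minimal sketch (not taken from the workshop materials) of the idea: an LLM proposes labels at development time, a human reviews the suggestions, and a small classifier is trained on the result. It assumes the OpenAI Python client and scikit-learn; the model name, label set and example texts are purely illustrative.

# Illustrative sketch, not the workshop's code:
# 1) ask an LLM to suggest labels for unlabelled texts,
# 2) have a human review and correct the suggestions,
# 3) train a small, fast classifier on the reviewed data.
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

LABELS = ["positive", "negative"]  # illustrative label set
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def suggest_label(text: str) -> str:
    """Ask the LLM for a label suggestion that a human annotator then reviews."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system",
             "content": f"Classify the text as one of: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else LABELS[0]

# Unlabelled examples; in practice these come from your own data.
texts = ["Great product, works as advertised.", "Terrible support, waste of money."]
suggested = [(text, suggest_label(text)) for text in texts]

# A human review step goes here (e.g. in an annotation tool); the sketch
# simply accepts the suggestions as-is.
reviewed = suggested

# Train a small, private classifier on the reviewed data. At inference time
# this model runs locally, with no prompt and no API call.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit([text for text, _ in reviewed], [label for _, label in reviewed])
print(model.predict(["The product broke after two days."]))

The LLM is only used while building the dataset; the deployed model is the small fine-tuned classifier, which is why it can be faster, cheaper and private at inference time.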

Date
Location
New York, NY
Ryan Wesslen
Machine learning engineer