Prompting

Tips, techniques, and best practices for prompt engineering across all OpenAI models.

I'm trying to fine-tune a model to be better at function calling for my specific use case, but I'm struggling with the training data format. The docs s…
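For function-calling fine-tuning, training files are JSONL in the chat-completions shape: each line pairs a `tools` list with `messages` in which the assistant replies via `tool_calls`. A minimal sketch of one example line, assuming a hypothetical `get_weather` tool (the key names follow the chat format; check the current fine-tuning docs for your model family):

```python
import json

# One training example for function-calling fine-tuning.
# The weather tool below is a hypothetical illustration.
example = {
    "messages": [
        {"role": "system", "content": "You are a weather assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            "role": "assistant",
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    # arguments must be a JSON *string*, not an object
                    "arguments": json.dumps({"city": "Paris"}),
                },
            }],
        },
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

# Training files are JSONL: one example object per line.
line = json.dumps(example)
```

A common stumbling block is the `arguments` field: it is a JSON-encoded string inside the JSON object, so double-encoding it (or passing a raw dict) will fail validation.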

Training loss going down doesn't mean your fine-tuned model is actually better. I've learned this the hard way. The evaluation framework I use: 1. Tasksp…
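The point about training loss generalizes: judge a fine-tune on a held-out eval set, not on the loss curve. A minimal sketch, assuming an exact-match task and a stand-in `model_answer` callable (both are illustrative, not part of the original post):

```python
# Minimal held-out evaluation, independent of training loss.
# `model_answer` stands in for a call to the model under test.
def evaluate(model_answer, eval_set):
    """Exact-match accuracy over (prompt, expected) pairs."""
    correct = sum(
        1 for prompt, expected in eval_set
        if model_answer(prompt).strip() == expected.strip()
    )
    return correct / len(eval_set)

# Toy eval set and a stub "model" for illustration.
eval_set = [("2+2?", "4"), ("capital of France?", "Paris")]
stub = {"2+2?": "4", "capital of France?": "Lyon"}
accuracy = evaluate(lambda p: stub[p], eval_set)  # one of two correct -> 0.5
```

Running the same function against the base model and each checkpoint makes "is it actually better" a number rather than a feeling.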

We ran the numbers on fine-tuning GPT-4o-mini vs. using few-shot prompting with GPT-4o for our classification task (10K requests/day). Option A: Few-shot GPT…
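The trade-off usually comes down to prompt length times price: few-shot prompting pays for the examples on every request, while a fine-tuned model bakes them into the weights. A back-of-envelope sketch; the token counts and per-1M-token prices are illustrative assumptions, so substitute current pricing for your models:

```python
# Back-of-envelope daily cost comparison for 10K requests/day.
# Token counts and $-per-1M-token prices are illustrative assumptions.
REQUESTS_PER_DAY = 10_000

def daily_cost(prompt_tokens, completion_tokens, in_price, out_price):
    """Cost per day given per-request tokens and $ per 1M tokens."""
    per_request = (prompt_tokens * in_price + completion_tokens * out_price) / 1e6
    return per_request * REQUESTS_PER_DAY

# Option A: few-shot large model -- long prompt (instructions + examples).
few_shot = daily_cost(1_500, 10, in_price=2.50, out_price=10.00)

# Option B: fine-tuned small model -- short prompt; fine-tuned inference
# costs more per token than the base small model, but far fewer tokens.
fine_tuned = daily_cost(200, 10, in_price=0.30, out_price=1.20)
```

At these assumed numbers the fine-tuned option is cheaper by more than an order of magnitude per day, before accounting for the one-time training cost.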

I fine-tuned GPT-4o-mini to generate SQL queries for our specific database schema, and the results are impressive. Sharing my approach. Dataset: 3,200 q…

OpenAI recently added Direct Preference Optimization (DPO) to the fine-tuning API. I've been testing it for preference alignment, and here are my first im…
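DPO training data is pairwise: each JSONL line carries one prompt plus a preferred and a non-preferred completion. A minimal sketch of one pair, assuming the `input` / `preferred_output` / `non_preferred_output` key names from the preference fine-tuning file format (the ticket-summary content is a made-up illustration):

```python
import json

# One DPO preference pair; the summary content is illustrative.
pair = {
    "input": {
        "messages": [
            {"role": "user", "content": "Summarize this ticket in one sentence."}
        ]
    },
    "preferred_output": [
        {"role": "assistant",
         "content": "Customer reports login failures after the 2.1 update."}
    ],
    "non_preferred_output": [
        {"role": "assistant",
         "content": "The customer has a problem. It is about logging in."}
    ],
}

# Preference files are also JSONL: one pair object per line.
line = json.dumps(pair)
```

Unlike supervised fine-tuning, the signal here is relative: the model learns from the gap between the two completions, so pairs where both outputs are near-identical contribute little.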

I'm fine-tuning GPT-4o-mini on a customer service dataset (5,000 examples) and seeing performance peak at epoch 2–3, then degrade significantly. Training m…
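When held-out performance peaks early and then degrades, the usual fix is to cap training at the peak epoch rather than let the job run longer. A sketch of the job configuration, assuming the fine-tuning jobs API's `hyperparameters` field; the model snapshot, file id, and epoch cap are placeholder assumptions:

```python
# Sketch: cap training at the epoch where held-out performance peaked.
# Model name, file id, and the epoch cap are placeholder assumptions.
job_config = {
    "model": "gpt-4o-mini-2024-07-18",
    "training_file": "file-abc123",       # hypothetical uploaded-file id
    "hyperparameters": {"n_epochs": 2},   # stop before the degradation sets in
}
# e.g. client.fine_tuning.jobs.create(**job_config)
```

Comparing the per-epoch checkpoints on a held-out set (rather than trusting training loss) is what tells you where that peak actually is.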