Project/Custom GPT context optimization

I'm currently building a project, or rather a custom GPT, for writing blog posts and similar content. As context, I've included all sorts of analyses and other data about my site — almost 100,000 tokens, which is way too much. How can I optimize or compress this? I'm hesitant to simply tell the model to extract the most important information, since I'm worried the result won't turn out well. Does anyone have experience with this? Thanks!
