Daily Productivity Sharing

Daily Productive Sharing 660 - How Should AI Systems Behave?

Dr Selfie
Feb 24, 2023
∙ Paid

One helpful tip per day :)

OpenAI explains why ChatGPT's output can be biased and how they plan to fix it. The post doubles as an excellent public product roadmap:

  1. ChatGPT is built on a huge neural network, so developing and iterating on the product is more like training a newborn puppy than writing conventional software.

  2. ChatGPT is trained in two stages: first on a broad pre-training dataset, then on a fine-tuning dataset. For the latter, OpenAI uses human reviewers to ensure quality.

  3. OpenAI has clear guidelines governing the human reviewers who shape the fine-tuning dataset, and will keep refining these guidelines to reduce the model's bias.

  4. They also see ChatGPT as just a base model: users should be able to customize it to their own needs, so they will invest in customization as well.
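The two-stage recipe in points 2 and 3 can be illustrated with a toy sketch. This is my own simplified analogy in Python, not OpenAI's actual pipeline: a crude word-frequency "model" is first built from a large unfiltered corpus, then nudged by a small human-reviewed dataset whose examples carry extra weight, mimicking how curated fine-tuning data steers behavior.

```python
from collections import Counter

def train(corpus, counts=None, weight=1):
    """Accumulate weighted word counts from a list of sentences."""
    if counts is None:
        counts = Counter()
    for sentence in corpus:
        for word in sentence.split():
            counts[word] += weight
    return counts

# Stage 1: "pre-training" on a large, unfiltered corpus.
pretrain_corpus = ["the cat sat", "the dog ran", "the cat ran"]
counts = train(pretrain_corpus)

# Stage 2: "fine-tuning" on a small, human-reviewed dataset;
# approved examples are weighted more heavily, so the curated
# data shifts the model's preferences.
finetune_corpus = ["the dog sat"]
counts = train(finetune_corpus, counts, weight=5)

print(counts.most_common(1))  # -> [('the', 8)]
```

The point of the analogy: the fine-tuning stage is tiny compared to pre-training, yet its extra weight visibly shifts the counts — which is why the reviewer guidelines in point 3 matter so much.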

If you enjoy today's sharing, why not subscribe?

Need a superb CV? Please try our CV Consultation.


