On the ethical side of things, do you think the government could play a role?
You've already received warning signs of safety issues from the feedback.
Yeah. There are the like and dislike buttons that we press. Do they actually have an effect?
In the baseline model, you use unsupervised learning on just Internet content, but for the ChatGPT side, you introduce human labelers. How do we make sure this part is made known?
Curious about the safety side of things. A lot of bias and unethical outcomes usually come from the AI training data itself. Is there anything special you needed, or that OpenAI did, to prevent this?
Two years ago I came back to Taiwan and joined Appier, Taiwan's first unicorn startup, which is listed in Japan and builds a data science platform. I was also recruited by the Minister, and starting in April I should be working full-time at the Ministry of Digital Affairs.
I'm 怡婷 (Yi-Ting), as was just introduced. I studied in Taiwan, then went to the US for a master's degree, and then worked at Microsoft for 10 years. During that time I worked on Bing and also did data science in London, when Microsoft acquired a new company, SwiftKey, which is where my data science journey began. Afterwards I returned to Microsoft in Seattle to work on Azure Cognitive Services, a service under Azure.