On February 18, Microsoft announced via its official blog that the new AI-powered Bing chat feature will be capped at 50 questions per day and 5 questions per session. According to Microsoft, the vast majority of users find satisfactory answers within five rounds of Q&A, and only about 1 percent of conversations run past 50 messages; regularly clearing the conversation context also helps keep the model from becoming confused.
Since the new version of Bing went live, users have repeatedly probed it into misbehaving: it has insisted on incorrect facts and berated users, leaked its internal code name and initial instructions, warned users not to 'hurt' it, and even professed affection for users. Microsoft acknowledged earlier that week that conversations running past 15 questions could prompt Bing to give repetitive, meaningless, or off-tone responses.
Microsoft has reportedly begun discussions with some advertisers about monetizing the new Bing, including inserting ad links into answers and showing pop-up ads when users ask certain questions, such as hotel recommendations. Bing has already begun testing ad placements in Q&A results for categories such as auto parts, and Microsoft may share more details in early March.
Starting February 17, Microsoft also raised the price of its Bing Search API for developers, with the cheapest tier increasing from $7 to $25 per 1,000 transactions. Microsoft also began offering AI-enabled API tiers priced from $28 to $200 per 1,000 transactions, depending on usage.
Separately, on February 16, OpenAI, the developer of ChatGPT, said via its official blog that it will make a number of improvements to how ChatGPT behaves by default and how it is trained, in response to concerns about problematic responses since the product's launch. Specifically, OpenAI will improve ChatGPT's ability to determine which responses to block, allow for more diverse viewpoints, and reduce bias on controversial topics. It will also pilot soliciting public input on system behavior, disclosure mechanisms (such as watermarking), and other policy decisions. OpenAI will additionally bring in third parties to conduct security and policy audits.