Dancing to the tune: Bard enters the Chatbot fray
The team behind Bard AI has announced that as of March 21, they are opening access to their experimental product. Bard AI is a cutting-edge tool that allows users to collaborate with generative AI in order to produce written content in a variety of forms, including poetry, lyrics, and stories.
Lifting the curtains on Bard: USA and UK release
The product is initially being launched in the United States and the United Kingdom, with plans to expand to other countries and languages in the future. Bard AI is designed to enable users to produce creative writing that mimics the style and tone of human writers, while also responding to specific prompts and themes.
Bard is driven by a lightweight, highly optimized variant of LaMDA, Google’s conversational language model, and will be progressively upgraded to newer and more capable models. Google’s long experience in surfacing accurate information informs Bard’s foundation.
In the team’s own words, taken from Tuesday’s blog post, “You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post.”
Although large language models (LLMs) are an exciting technological advancement, they do have their flaws. One significant issue is that these models learn from a broad spectrum of information, including real-world biases and stereotypes that can sometimes be reflected in their outputs. This can result in outputs that are inaccurate or perpetuate said harmful stereotypes. Additionally, LLMs can sometimes present information with an unwarranted degree of confidence, leading to the dissemination of misleading or false information.
A striking example with dramatic ramifications came during Bard’s debut last February, when it mistakenly claimed that the James Webb Space Telescope was the first to take a picture of a planet outside Earth’s solar system — a feat actually achieved by the European Very Large Telescope in 2004. This blunder alone wiped $100bn off the company’s market value overnight.
While confident errors are never intentional, they highlight the need for caution when using LLMs to generate content or to make decisions based on their outputs. In the professional space, it’s essential that both employers and employees remain vigilant and continue to verify information through human oversight — a reminder of how much accurate information matters, and how dangerous misinformation can be, in professional settings.
Bard on the world stage – First impressions
Bard’s intuitive interface directly connects users to an LLM, and is intended to complement the Google Search experience. Bard is designed to allow users to verify its responses or conduct further research by effortlessly visiting Search. By clicking the “Google it” button, users can access suggested queries and explore relevant results on a new tab.
Bard has a polished and user-friendly interface with new features such as speech input, the ability to view drafts, and the option to Google responses.
Bard’s human-like temperament and quick response times set it apart from other AI chatbots. Its content production ability is similar to that of ChatGPT, but it rarely cites sources or checks itself for redundancy.
Google’s data tracking is evident in Bard: all prompts are saved, though the responses are not. Bard is conversational and feels like a virtual assistant that genuinely wants to help.
Its potential for the future is promising, and it could change the way people use search engines. However, the lack of source citations could be seen as irresponsible and should be at the top of the team’s ‘to-dos’.
Building Bard: Development and Improvements
Google acknowledges that large language models (LLMs) may sometimes generate information that is biased, misleading, or incorrect. To address these concerns, Google provides several draft versions of Bard’s response, allowing you to select the most appropriate one.
You can engage further with Bard by asking additional questions or requesting alternative answers.
A language model functions as a prediction generator, assembling responses word by word from probable choices. Relying solely on the most predictable option would produce uninspired output, so a degree of flexibility is incorporated. Usage patterns reveal that LLMs become more effective at providing helpful responses as they gain more exposure to users.
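The “degree of flexibility” described above is commonly implemented as temperature sampling: rather than always picking the single most probable next word, the model draws from the probability distribution, with a temperature parameter controlling how adventurous the draw is. The sketch below is a minimal illustration of that general technique, not Bard’s actual decoding code; the word scores are made up for the example.

```python
import math
import random

def sample_next_word(logits, temperature=0.8):
    """Pick the next word from a model's raw scores (logits).

    Low temperature favours the most likely word (approaching
    greedy decoding); higher temperature adds the flexibility
    that keeps generated text from sounding uninspired.
    """
    # Softmax with temperature: scale the scores, then normalize.
    scaled = {w: s / temperature for w, s in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {w: math.exp(s - max_s) for w, s in scaled.items()}
    total = sum(exps.values())
    probs = {w: e / total for w, e in exps.items()}
    # Draw one word in proportion to its probability.
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical scores for three candidate next words.
logits = {"the": 2.0, "a": 1.0, "quantum": 0.5}
# Near-zero temperature behaves like greedy decoding:
print(sample_next_word(logits, temperature=0.001))  # almost surely "the"
```

At temperatures around 1, less likely words such as “quantum” get a real chance of being chosen, which is where varied, creative-sounding output comes from.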
“The next critical step in improving it is to get feedback from more people” – Sissie Hsiao and Eli Collins (Google Blog, 21/03/23)
The team is also working on deeper integration of LLMs into Search to enhance the overall user experience, with more exciting developments to be announced soon.
Bard’s development is guided by Google’s AI Principles on safety and responsibility, which set out seven objectives that its AI applications will pursue:
- Be socially beneficial – Google endeavours to utilize AI to provide accessible and precise information of superior quality, while upholding cultural, social, and legal standards within the regions where business is conducted.
- Avoid creating or reinforcing unfair bias – Google is committed to preventing consequences from unfair bias on individuals, particularly those associated with sensitive traits like race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious convictions.
- Be built and tested for safety – Google will persist in enhancing and implementing robust safety and security protocols to prevent any inadvertent outcomes that may pose potential risks of harm.
- Be accountable to people – Google’s AI technologies will be subjected to the appropriate level of human supervision and control.
- Incorporate privacy design principles – Google will offer opportunities for notification and consent, promote architectures that integrate privacy safeguards, and provide adequate transparency and control over the usage of data.
- Uphold high standards of scientific excellence – Google’s AI tools endeavour to unlock novel frontiers of scientific research and knowledge in essential fields such as biology, chemistry, medicine, and environmental sciences.
- Be made available for uses that accord with these principles – Google will endeavour to restrict potentially dangerous or abusive uses of its AI technology.
In light of the final principle, Google also highlights a number of applications it will not pursue for its AI: technologies likely to cause harm, weapons and similar technology, technology that compromises user data safety, and technology that contravenes widely accepted principles of law and human rights.
Check out the AIBC Americas Summit coming this June
Interested in the latest developments in AI, Crypto, Blockchain and other emerging tech? The AIBC Americas Summit in Brazil is the perfect opportunity to connect with LatAm’s leading innovators and policy makers, from 14-17 June.
Brazil is an emerging innovation hub in the tech sector, making it an ideal location for the summit. The country is quickly becoming a leader in tech-focused businesses, particularly in AI and Blockchain, driving greater levels of innovation, productivity, and socio-economic progress.
In fact, Brazil was ranked 40th out of 192 countries in Oxford Insights’ AI Readiness Index 2019 and is now considered one of the top five countries in the world for growth in AI hiring, according to an AI Index report from Stanford University.
The AIBC Americas Summit will provide attendees with valuable insights and networking opportunities in this rapidly growing market. Don’t miss out on the chance to be part of the tech revolution in Brazil and beyond.