
What is CHAI AI's Chaiverse?

Retrieved on: 
Saturday, February 17, 2024

PALO ALTO, Calif., Feb. 17, 2024 /PRNewswire/ -- A lot of Chai users have been asking, "What is Chaiverse?", a feature the Chai developers have so far advertised only sparingly. With a recent valuation of $450M, the Chai AI team has been pouring its funding into the development of Chaiverse, with the aim of connecting world-class Large Language Model (LLM) developers directly to millions of Chat AI consumers, offering each user a tailor-made combination of LLMs. This article provides a deep dive into the Chaiverse mechanism, how it works, and its implications for users.

Key Points: 
  • Unlike other Generative AI products, which typically offer models developed in-house, Chai AI has recently introduced their model developer platform, Chaiverse.
  • Chai AI's research team purports to be laser-focused on exploring what makes each Large Language Model (LLM) unique and on improving person-to-LLM recommendations.
  • After the model is operational, users of the Chai App can engage with it through arena mode, providing immediate numerical and textual feedback to developers (a rough sketch of how such feedback could be turned into a ranking follows this list).
  • The Chai App primarily features interactions with AI-driven bots, trained to mimic human-like conversations and behaviors.
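
The release does not publish how arena-mode feedback is scored, so the following is only a minimal, hypothetical sketch: it assumes an Elo-style update over pairwise user preferences, a common way chat arenas rank competing models. Every function and model name below is invented for illustration and is not confirmed as Chaiverse's actual method.

```python
# Illustrative only: Elo-style aggregation of pairwise "arena" preferences.
# The press release does not describe Chaiverse's real scoring; all names here
# are hypothetical.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under an Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_ratings(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one user preference."""
    ea = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    rating_a += k * (score_a - ea)
    rating_b += k * ((1.0 - score_a) - (1.0 - ea))
    return rating_a, rating_b

# Example: two submitted models start at 1000; users preferred A twice, B once.
ratings = {"model_a": 1000.0, "model_b": 1000.0}
for winner in ["model_a", "model_a", "model_b"]:
    ratings["model_a"], ratings["model_b"] = update_ratings(
        ratings["model_a"], ratings["model_b"], winner == "model_a"
    )
print(ratings)
```

In practice a platform would also weight textual feedback, rank many models at once, and guard against noisy votes; the point is simply that each user comparison nudges the ranking developers see.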

UiPath Announces New Developer Features at DevCon 2024 to Bring Latest in AI-powered Productivity to Developer Community

Retrieved on: 
Friday, February 16, 2024

“Our quest to reimagine DevOps for automation developers with AI led us to the creation of UiPath Autopilot,” said Munil Shah, Chief Technology Officer, UiPath Automation Cloud.

Key Points: 
  • “Our quest to reimagine DevOps for automation developers with AI led us to the creation of UiPath Autopilot,” said Munil Shah, Chief Technology Officer, UiPath Automation Cloud.
  • Availability of new India data center: UiPath Automation Cloud expands globally with new data centers, including in India, as of April 2024.
  • To listen to the expert presentations from UiPath DevCon 2024, register for the recorded version of the event here.
  • The UiPath Academic Alliance program is working with FutureSkills Prime, other partners, and UiPath customers to equip 500,000 Indians with AI and automation skills by 2027.

Guardrails AI is Solving the LLM Reliability Problem for AI Developers With $7.5 Million in Seed Funding

Retrieved on: 
Thursday, February 15, 2024

GenAI is unlocking new workflows for AI systems to augment humans in a way that has never been done before.

Key Points: 
  • GenAI is unlocking new workflows for AI systems to augment humans in a way that has never been done before.
  • Guardrails’ safety layer surrounds the AI application and is designed to enhance the reliability and integrity of AI applications via validation and correction mechanisms.
  • With Guardrails Hub, developers can build validators: advanced validation techniques tailored to the specific safety, compliance, and performance requirements of AI applications (see the sketch after this list).
  • "With Guardrails AI, we see not just a company but a movement towards securing AI's future in enterprise.

Kong Open Sources New AI Gateway to Help Developers Easily Build Multi-LLM Apps

Retrieved on: 
Thursday, February 15, 2024

SAN FRANCISCO, Feb. 15, 2024 /PRNewswire/ -- Kong Inc., a leading developer of cloud API technologies, today announced a suite of open-source AI plugins for Kong Gateway 3.6 that can turn any Kong Gateway deployment into an AI Gateway, offering unprecedented support for integrating multiple Large Language Models (LLMs). By upgrading to Kong Gateway 3.6, available today, users can access a suite of six new plugins that are entirely focused on AI and LLM usage. This will enable developers who want to integrate one or more LLMs into their products to be more productive and ship AI capabilities faster, while at the same time offering architects and platform teams a secure solution that ensures visibility, control, and compliance on every AI request sent by their teams. Due to the tight integration with Kong Gateway, it will now be possible to easily orchestrate AI flows in the cloud or on self-hosted LLMs with industry-leading performance and low latency, which are critical to the performance of AI-based applications.

Key Points: 
  • By upgrading to Kong Gateway 3.6, AI builders can access this new suite of plugins entirely focused on AI and LLM usage.
  • Central AI Credential Management: The "ai-proxy" plugin helps ensure secure and centralized storage of AI credentials within Kong Gateway (see the client-side sketch after this list).
  • Comprehensive AI Egress with Extensive Features: The integration of these AI capabilities within Kong Gateway centralizes the management, security, and monitoring of AI traffic.
  • The AI Gateway is equipped from day one with all Kong Gateway features, making it, we believe, the most capable in the AI ecosystem.
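
The announcement does not spell out route paths or payloads, so the sketch below only illustrates the client-side effect it claims: an application talks to a gateway route instead of an LLM provider, and holds no provider API key because credentials are stored centrally in the gateway. The host, path, and OpenAI-style request/response shapes are assumptions for illustration; the actual plugin configuration belongs in the Kong Gateway 3.6 documentation.

```python
# Hedged sketch of a client calling an LLM through a Kong Gateway route that
# has the new AI plugins in front of it. The URL and payload shape below are
# assumptions (modeled on a common OpenAI-style chat body), not Kong's
# documented interface.
import requests

GATEWAY_URL = "http://localhost:8000/chat"  # hypothetical Kong route

def ask_via_gateway(question: str) -> str:
    payload = {
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": question},
        ]
    }
    # No Authorization header: per the announcement, the gateway centrally
    # stores and injects the upstream LLM credential.
    resp = requests.post(GATEWAY_URL, json=payload, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    # Response shape assumed to mirror an OpenAI-style chat completion.
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_via_gateway("Summarize what an AI gateway does in one sentence."))
```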

Dstillery Integrates Trailblazing Ad Targeting Solutions with Cadent

Retrieved on: 
Thursday, February 15, 2024

NEW YORK, Feb. 15, 2024 /PRNewswire/ -- Dstillery ("the Company"), the leader in AI ad targeting, today announced its audience solutions are now integrated with Cadent, the largest independent solutions provider for converged TV advertising. The partnership brings Dstillery's patented data science to brands and agencies leveraging Cadent Aperture Platform for identifying target audiences, activating media, and analyzing campaigns to make the most of their media investments.

Key Points: 
  • NEW YORK, Feb. 15, 2024 /PRNewswire/ -- Dstillery ("the Company"), the leader in AI ad targeting, today announced its audience solutions are now integrated with Cadent, the largest independent solutions provider for converged TV advertising.
  • Dstillery's ID-free is a first-of-its-kind targeting technology representing a paradigm shift in programmatic advertising, prioritizing performance and privacy.
  • "Integrating Dstillery's innovative AI data solutions will allow us to continue to support our advertising clients' need for performance and precision targeting."
  • Dstillery has been widely honored by industry publications and associations, including AdExchanger, Ad Age, Adweek, Business Intelligence Group, Fast Company, and IAB Tech Lab.

CHAI AI: A Top Platform for Conversational Artificial Intelligence

Retrieved on: 
Wednesday, February 14, 2024

PALO ALTO, Calif., Feb. 14, 2024 /PRNewswire/ -- In the dynamic landscape of AI-powered content generation, OpenAI stands at the forefront of Artificial General Intelligence (AGI) research. However, the primary purpose of products such as GPT-4 is to act as a productivity aid for tasks such as coding. In addition, access to the best such models is often restricted by paywalls. These limitations have led to the growing popularity of other platforms that are primarily focused on providing Generative AI for consumers.

Key Points: 
  • On the consumer side, the main use case for Generative AI is for conversational purposes, with the leading platform, TikTok, amassing over 1 billion monthly active users.
  • The largest emerging players are Character.AI and Chai AI, reporting 20 million and 5 million monthly active users, respectively.
  • One of the main differentiators setting TikTok, Chai AI, and Character AI apart is their platform-centric approach to Chat AI.
  • Unlike other Generative AI products, which typically offer models developed in-house, Chai AI has recently introduced their developer platform, Chaiverse.

Media Alert: Intel to Provide Updates on Foundry Business and Process Roadmap at IFS Direct Connect

Retrieved on: 
Wednesday, February 14, 2024

Intel announced today that it will provide updates on its foundry business and process roadmap at IFS Direct Connect, Intel’s flagship foundry customer event on Feb. 21 in San Jose, California.

Key Points: 
  • Intel announced today that it will provide updates on its foundry business and process roadmap at IFS Direct Connect, Intel’s flagship foundry customer event on Feb. 21 in San Jose, California.
  • View the full release here: https://www.businesswire.com/news/home/20240214150438/en/
  • The opening keynote will begin at 8:30 a.m. PST and feature Pat Gelsinger, CEO of Intel, and Stuart Pann, senior vice president and general manager of Intel Foundry Services.
  • More: To learn more, visit the Intel Newsroom and follow along on social media with @IntelNews and @Intel on X, formerly Twitter, and Intel on LinkedIn.

Upwork Reports Fourth Quarter and Full Year 2023 Financial Results

Retrieved on: 
Wednesday, February 14, 2024

SAN FRANCISCO, Calif., Feb. 14, 2024 (GLOBE NEWSWIRE) -- Upwork Inc. (Nasdaq: UPWK), the world’s largest work marketplace that connects businesses with independent talent from across the globe, today announced its financial results for the fourth quarter and full year of 2023.

Key Points: 
  • SAN FRANCISCO, Calif., Feb. 14, 2024 (GLOBE NEWSWIRE) -- Upwork Inc. (Nasdaq: UPWK), the world’s largest work marketplace that connects businesses with independent talent from across the globe, today announced its financial results for the fourth quarter and full year of 2023.
  • “Last year proved Upwork’s continued growth momentum and strong profitability.
  • Our business is flexible and resilient, as the skilled talent on Upwork are a critical resource to businesses small and large,” said Hayden Brown, president and CEO, Upwork.
  • “The speed with which we strategically shifted to mid-teens adjusted EBITDA margins—taking just two quarters—reflects the strong operating leverage and agility of our business,” said Erica Gessert, CFO, Upwork.

The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission

Retrieved on: 
Tuesday, February 13, 2024

Martin and John Grisham have also brought legal cases against ChatGPT owner OpenAI over copyright claims.

Key Points: 
  • Martin and John Grisham have also brought legal cases against ChatGPT owner OpenAI over copyright claims.
  • But the NYT case is not “more of the same” because it throws interesting new arguments into the mix.
  • The legal action focuses in on the value of the training data and a new question relating to reputational damage.
  • It is a potent mix of trade marks and copyright and one which may test the fair use defences typically relied upon.

Fair use?

  • The challenge for this type of attack is the fair use shield.
  • In the US, fair use is a doctrine in law that permits the use of copyrighted material under certain circumstances, such as in news reporting, academic work and commentary.
  • Anticipating some of the difficulties that such a fair use defence could potentially cause, the NYT has adopted a slightly different angle.
  • This introduction of some aspect of commercial competition and commercial advantage seems intended to head off the usual fair use defence common to these claims.
  • It will be interesting to see whether the assertion of special weighting in the training data has an impact.
  • If it does, it sets a path for other media organisations to challenge the use of their reporting in the training data without permission.
  • This case will be watched closely by other media publishers, especially those behind paywalls, and with particular regard to how it interacts with the usual fair use defence.


Peter Vaughan does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Artificial intelligence needs to be trained on culturally diverse datasets to avoid bias

Retrieved on: 
Tuesday, February 13, 2024

The capabilities of LLMs have developed into quite a wide range, from writing fluent essays, through coding to creative writing.

Key Points: 
  • The capabilities of LLMs have developed into quite a wide range, from writing fluent essays, through coding to creative writing.
  • LLMs are trained by reading massive amounts of text and learning to recognize and mimic patterns in the data (a toy illustration of this idea follows this list).
  • Because the internet is still predominantly English — 59 per cent of all websites were in English as of January 2023 — LLMs are primarily trained on English text.
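
As a toy illustration of "learning to recognize and mimic patterns in the data" (and of why the makeup of that data matters), the snippet below trains a word-level bigram model on a tiny corpus and then generates from it. It is not an LLM; it simply shows that such a model can only reproduce patterns present in whatever text it was fed, which is the root of the bias concern discussed in this article.

```python
# Toy illustration, not an actual LLM: a word-level bigram model that "reads"
# a small corpus, counts which word tends to follow which, and then mimics
# those patterns when generating new text.
import random
from collections import Counter, defaultdict

corpus = (
    "the model reads text and learns patterns . "
    "the model then mimics patterns in new text ."
).split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the model reads text and learns patterns ."
```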

Model bias

  • By default, ChatGPT followed the North American standard of a 15 to 25 per cent tip, ignoring the Spanish norm not to tip.
  • It’s unclear if this capability emerged from training a newer version of the model on more data — after all, the web is full of tipping guides in English — or whether OpenAI patched this particular behaviour.
  • Again, ChatGPT likely assumed they were invited for a standard North American 6 p.m. dinner.
  • A similar phenomenon is encountered when asking DALL-E 3, an image generation model trained on pairs of images and their captions, to generate an image of a breakfast.
  • This model, which was trained on mainly images from Western countries, generated images of pancakes, bacon and eggs.

Impacts of bias

  • Just as cross-cultural human interactions can lead to miscommunications, users from diverse cultures who interact with conversational AI tools may feel misunderstood and experience them as less useful.
  • As more people rely on LLMs to edit their writing, these models are likely to homogenize how we write.

Decision-making and AI

  • AI is already in use as the backbone of various applications that make decisions affecting people’s lives, such as resume filtering, rental applications and social benefits applications.
  • Lack of cultural awareness may lead to AI perpetuating stereotypes and reinforcing societal inequalities.

LLMs for languages other than English

  • First, there is a huge population of English speakers outside of North America who are not represented by English LLMs.
  • Second, many users whose native language is not English still choose to use English LLMs.
  • Due to either a lack of availability of LLMs in their native languages, or superior quality of the English LLMs, users from diverse countries and backgrounds may prefer to use English LLMs.

Ways forward

  • Our research group at the University of British Columbia is working on enhancing LLMs with culturally diverse knowledge.
  • Together with graduate student Mehar Bhatia, we trained an AI model on a collection of facts about traditions and concepts in diverse cultures.
  • Our future research will go beyond teaching models about the existence of culturally diverse concepts to better understand how people interpret the world through the lens of their cultures.
  • With AI tools becoming increasingly ubiquitous in society, it is imperative that they go beyond the dominant Western and North American perspectives.


Vered Shwartz does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.