Builder Role
How to become a builder on the Synesis One AI training platform
Human Language
Human language is complex and nuanced. Words can have ambiguous meanings. There's sarcasm, double entendres, and innuendo. Sometimes we speak volumes without saying a word, communicating instead through a wry grin or a raised eyebrow. Even humans often miss these subtle cues. Imagine, then, how challenging it must be for a computer to understand us, especially once you throw in misspellings, emojis, and slang.
Building a Better Bot
Big Tech's approach to this problem rests on the premise that the more processing power and the larger the data set, the more proficient and "intelligent" our AI systems will become. Though skilled at detecting patterns in human language, machine-learning algorithms fall short in natural language understanding (the ability to determine intent). That's because they ignore the logic embedded in human language and rely instead on statistical computations to generate predictive approximations of human intention. In contrast, the Mind AI reasoning engine learns through contextualization and abstract reasoning, just as humans do. This "Canonical" approach gives the Mind AI engine an edge in working out human intention, which is why Mind AI decided to focus on conversational AI for its first commercial application.
The Mind AI reasoning engine can reason and draw logical conclusions from its inputs. For example, if I teach the AI that "my phone battery is dead" is a cause of "my phone won't turn on", the AI will understand that a phone needs power from its battery to function, and it will apply that knowledge when it encounters the same issue in the future. To understand human requests across multiple domains, the Mind AI reasoning engine needs a robust database built on natural language inputs. To build this database, Mind AI teamed up with Synesis One to crowdsource (through our train2earn App) the knowledge the AI needs to create a "mental map" of the world. That's where you come in!
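To make that idea concrete, here is a minimal sketch of a taught cause-and-effect lookup. It is not Mind AI's actual engine, canonical format, or API; the class and method names are hypothetical, and the logic is reduced to a simple dictionary so the "teach once, reuse later" behavior is easy to see.

```python
# Illustrative sketch only -- not Mind AI's data format or API.
# It models the idea described above: once a cause-effect relation is
# taught in natural language, it can be reused to explain the same
# symptom later. All names here are hypothetical.

class CauseEffectStore:
    def __init__(self):
        # maps an observed effect to the causes it has been taught
        self._causes_by_effect = {}

    def teach(self, cause: str, effect: str) -> None:
        """Record that `cause` is a possible reason for `effect`."""
        self._causes_by_effect.setdefault(effect, []).append(cause)

    def explain(self, effect: str) -> list[str]:
        """Return the previously taught causes for an observed effect."""
        return self._causes_by_effect.get(effect, [])

store = CauseEffectStore()
store.teach(cause="my phone battery is dead", effect="my phone won't turn on")

# Later, the same complaint can be traced back to the taught cause.
print(store.explain("my phone won't turn on"))
# ['my phone battery is dead']
```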
How train2earn works
Our architects create campaigns (based on client needs) to crowdsource the domain-specific knowledge needed to answer customer queries. Next, Builders choose which campaigns to work on and use their creativity to come up with "utterances" (a term from linguistics for the smallest unit of speech) expressing every possible way someone might phrase a particular query in a given domain. Finally, Validators review each submission and either approve or reject it.
Builders whose utterances are approved are paid in SNS tokens. In this way, over time, the Mind AI reasoning engine expands its "mental map" of the world, which will help it contextualize human queries and make logical inferences about human intentions.
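For readers who think in code, the lifecycle described above can be sketched roughly as follows. This is an illustrative model only: the names (Campaign, Submission, Status) and the per-utterance reward are made up for the example and do not reflect the actual train2earn App or its payout rules.

```python
# Hypothetical sketch of the train2earn flow described above:
# architects open a campaign, Builders submit utterances,
# Validators approve or reject them, and approved Builders earn SNS.
# None of these names or amounts come from the real platform.

from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Submission:
    builder: str
    utterance: str
    status: Status = Status.PENDING

@dataclass
class Campaign:
    domain: str                                    # e.g. "phone troubleshooting"
    submissions: list[Submission] = field(default_factory=list)

    def submit(self, builder: str, utterance: str) -> Submission:
        s = Submission(builder, utterance)
        self.submissions.append(s)
        return s

    def validate(self, submission: Submission, approved: bool) -> None:
        submission.status = Status.APPROVED if approved else Status.REJECTED

    def payouts(self, sns_per_utterance: float) -> dict[str, float]:
        """Sum hypothetical SNS rewards per Builder for approved utterances."""
        totals: dict[str, float] = {}
        for s in self.submissions:
            if s.status is Status.APPROVED:
                totals[s.builder] = totals.get(s.builder, 0.0) + sns_per_utterance
        return totals
```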
Building a Mental Map
The Mind AI engine classifies utterances into three types: specific, general, and entailment. These categories are rooted in linguistic theory but also map onto Mind AI's unique three-node data structure (called a canonical) that the engine uses to establish logical relations between ontologies (see the technical white paper).
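As a rough illustration of that classification, an utterance record might carry one of the three type labels alongside the nodes it relates. This is not the engine's internal representation (that is defined in the technical white paper); the field names, glosses, and the example values below are guesses made for readability only.

```python
# Rough illustration only. The real canonical structure is defined in
# Mind AI's technical white paper; the field names and glosses below
# are assumptions, not the engine's actual schema.

from dataclasses import dataclass
from enum import Enum

class UtteranceType(Enum):
    # The labels come from the text above; the one-line glosses are guesses.
    SPECIFIC = "specific"        # e.g. a concrete statement about one case
    GENERAL = "general"          # e.g. a broader rule or generalization
    ENTAILMENT = "entailment"    # e.g. a statement that follows from another

@dataclass
class Canonical:
    """A hypothetical three-node relation between ontology entries."""
    nodes: tuple[str, str, str]   # three linked nodes; details per the white paper
    utterance_type: UtteranceType

example = Canonical(
    nodes=("phone battery is dead", "causes", "phone won't turn on"),
    utterance_type=UtteranceType.SPECIFIC,
)
```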
Mind AI's linguists have determined that the AI needs approximately 300 good-quality, validated utterances with pattern diversity per topic to grasp the various ways humans might express a given query. Crafting utterances that pass validation and improve the AI can be challenging, especially when you're just getting started. To help you increase your success rate in train2earn, we've put together the following Builder Guidelines.