TWS Chip Update - ADA100

Author: Site Editor

TWS earphones are developing rapidly, and the market is paying increasing attention to how earphones interact with users. Touch modules, for example, use capacitive or optical sensors to perform operations such as switching tracks and pausing playback, while some manufacturers focus on smart voice modules that control the earphones by voice for more convenient interaction.
In the smart voice module market, manufacturers are continuously innovating, pursuing higher performance and smaller sizes while also weighing product development cycles and power consumption. Driven by these factors, major manufacturers have successively released more powerful chips of their own.

The ADA100 can be widely used in wearables, mobile phones, IoT devices, smart home appliances, and other fields to give products intelligent voice control with a high wake-up rate. Jiutian Ruixin's ADA100 is an integrated sensing-storage-computing chip aimed mainly at intelligent speech recognition: it detects in real time whether the current audio contains a speech signal, distinguishes speech from various background noises, and processes the two kinds of signal differently.
It is worth noting that the ADA100 is a back-end processing chip that must be paired with a sensor, which makes its application range very wide. Beyond intelligent speech recognition, it can work with pressure sensors, vibration sensors, gravity sensors, thermal sensors, and current-detection sensors to implement functions such as pressure monitoring, abnormal-movement alarms, overheating alarms, power-consumption tracking, and gravity sensing. The sensor-memory-computing integrated ADA100 has now entered mass production with sufficient capacity; many brands have adopted it or brought it in for testing, and actual product applications are expected in the second half of 2022.

Three key features of the ADA100

1. Sense-memory-compute integration
The sound signal is picked up by the microphone, and analog-domain feature extraction is performed by the analog preprocessing stage (ASP) to generate a feature signal. That feature signal is then fed to the neural network processor (NPU), which runs the VAD and KWS algorithms for speech detection and keyword recognition.
Built-in memory: 32 KB OTP and 64 KB SRAM.
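The two-stage detection described above (VAD wake-up, then keyword confirmation) can be sketched from the host MCU's side as a small state machine. The event codes and states below are illustrative assumptions; the actual signalling between the ADA100 and a host is not specified in this article.

```c
#include <stdbool.h>

/* Hypothetical event codes a host MCU might derive from the chip's
 * interrupt/output line; these names are assumptions, not a datasheet. */
typedef enum { EVT_NONE, EVT_VAD_SPEECH, EVT_KWS_KEYWORD } ada100_event_t;

typedef enum { HOST_SLEEP, HOST_LISTENING, HOST_ACTIVE } host_state_t;

/* Sketch of the host-side reaction to the chip's two-stage detection:
 * VAD moves the host into a listening state, a confirmed keyword
 * promotes it to fully active, and background noise lets it sleep. */
host_state_t host_step(host_state_t s, ada100_event_t evt) {
    switch (evt) {
    case EVT_VAD_SPEECH:  return HOST_LISTENING; /* speech detected */
    case EVT_KWS_KEYWORD: return HOST_ACTIVE;    /* keyword confirmed */
    case EVT_NONE:        return HOST_SLEEP;     /* noise only, go back */
    }
    return s; /* unreachable for valid events */
}
```

The point of the two stages is that the host stays asleep until the chip itself decides speech is present, which is what enables the microamp-level listening currents quoted below.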

2. Ultra-low power consumption
At typical operating conditions the ADA100 draws less than 70 uA in VAD mode and less than 170 uA in KWS mode, only one-fifth to one-tenth the power consumption of comparable products.
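To put those currents in perspective, a simple duty-cycle estimate shows what always-on listening costs. The quoted figures (70 uA VAD, 170 uA KWS) come from the text; the duty fraction and battery capacity in the usage below are illustrative assumptions, not chip specifications.

```c
/* Back-of-envelope average current for an always-listening device that
 * spends most of its time in VAD mode and a small fraction in KWS mode. */
static double avg_current_ua(double vad_ua, double kws_ua, double kws_duty) {
    return vad_ua * (1.0 - kws_duty) + kws_ua * kws_duty;
}

/* Hours of continuous listening from a battery of the given capacity
 * (in uAh), ignoring the rest of the system's consumption. */
static double listen_hours(double battery_uah, double avg_ua) {
    return battery_uah / avg_ua;
}
```

For example, at 10% KWS duty the average is 70 × 0.9 + 170 × 0.1 = 80 uA, so a hypothetical 50 mAh earbud cell would cover 625 hours of listening for the detection chip alone.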

3. Ultra-small package
The ADA100 is offered in two package sizes:
QFN: 2.5 mm × 2.5 mm × 0.55 mm
WLCSP: 1.5 mm × 1.5 mm × 0.45 mm

ADA100 Algorithm Specifications

The ADA100 runs VAD and KWS algorithms and supports VAD-only, KWS-only, and VAD+KWS modes. The KWS algorithm can recognize up to 30 keywords, so a rich set of voice operations can be configured. A VAD+user_KWS mode is also supported: if the user wants to change or replace the parameters of the VAD and KWS algorithms, the new parameters can be loaded through the ADA100's I2C or SPI interface.
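Since the text says replacement algorithm parameters are loaded over I2C or SPI, a host would typically push them in small framed chunks. The register address, chunk size, and XOR checksum below are illustrative assumptions for the sketch, not the ADA100's actual protocol.

```c
#include <stdint.h>
#include <stddef.h>

#define ADA100_MAX_CHUNK 32u /* assumed per-transfer payload limit */

/* One hypothetical parameter-download frame for an I2C/SPI write. */
typedef struct {
    uint8_t reg;                    /* target register (assumed) */
    uint8_t len;                    /* payload length in bytes */
    uint8_t data[ADA100_MAX_CHUNK]; /* parameter bytes */
    uint8_t checksum;               /* XOR of payload, for integrity */
} ada100_frame_t;

/* Build one write frame; returns 0 on success, -1 if the chunk is
 * too long. The caller would then hand the frame to its I2C driver. */
int ada100_build_frame(ada100_frame_t *f, uint8_t reg,
                       const uint8_t *buf, size_t n) {
    if (n > ADA100_MAX_CHUNK) return -1;
    f->reg = reg;
    f->len = (uint8_t)n;
    f->checksum = 0;
    for (size_t i = 0; i < n; i++) {
        f->data[i] = buf[i];
        f->checksum ^= buf[i];
    }
    return 0;
}
```

Chunked writes with a simple integrity check are a common pattern for updating coefficients on small always-on chips, since the host may be a resource-limited MCU.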

Combining the earphone's analog microphone with the ADA100 makes it possible to identify the sound field inside the ear and adaptively adjust volume and sound effects; it can also identify the external sound field to adapt noise reduction and cut the power consumed by noise cancellation. With its 30-keyword recognition, the ADA100 also enables voice-controlled human-computer interaction such as answering calls, adjusting volume, and switching music, which is faster and more convenient.
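On the earphone firmware side, the 30 recognized keywords would typically arrive as keyword IDs that get mapped to actions such as those listed above. The IDs and the three sample keywords below are placeholders, since the keyword set is configurable.

```c
/* Actions mentioned in the article: call handling, volume, music. */
typedef enum { ACT_NONE, ACT_ANSWER_CALL, ACT_VOLUME_UP, ACT_NEXT_TRACK } action_t;

/* Illustrative keyword-ID to action mapping; real IDs depend on how
 * the user's keyword set was configured on the chip. */
action_t dispatch_keyword(int kw_id) {
    switch (kw_id) {
    case 1:  return ACT_ANSWER_CALL; /* e.g. "answer" */
    case 2:  return ACT_VOLUME_UP;   /* e.g. "volume up" */
    case 3:  return ACT_NEXT_TRACK;  /* e.g. "next song" */
    default: return ACT_NONE;        /* unrecognized ID / noise */
    }
}
```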