Beyond standard hearing care

Solutions

For hearing care professionals: Detect hearing loss earlier than is possible today and benefit from an easier, faster fitting process.


For manufacturers of hearing technologies: Stay ahead with an embedded solution that provides individualized, neural-network-based sound processing for hearables, hearing aids, cochlear implants and automatic speech recognition systems.


For people with hearing difficulties: Receive a precise diagnosis of your hearing problem and an individualized technical solution to maintain your desired quality of life.


CochSyn™ Test

Our diagnostic solution for hidden hearing loss – a patented portable EEG-based measurement system

CoNNear™ Embedded Algorithm

Our treatment solution for hidden hearing loss – tailored to the personal hearing profile of the CochSyn test

With the CochSyn™ test, we strive to diagnose (hidden) hearing loss before it can be measured by the tools and tests currently available to clinicians.

Cochlear synaptopathy is considered an early sign of hearing loss because it can occur before there is any noticeable decline in a person’s ability to hear.

Unlike other types of hearing loss, which are often caused by damage to the hair cells themselves, cochlear synaptopathy affects the connection between the hair cells and the hearing nerve. This means that the hair cells may still be intact, but the signals that they send to the brain are not as strong as they should be.

In some cases, a person with cochlear synaptopathy may be able to hear sounds just fine in a quiet environment, but have difficulty hearing or understanding speech in noisy or crowded situations.

With the CoNNear™ algorithm, we aim to improve speech understanding in noise for people with hidden hearing loss and to improve the performance of voice-controlled devices.

The CoNNear™ algorithm is a convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications. The model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, which is essential for speech understanding in noisy situations.
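To make the idea concrete, the sketch below shows how a stack of strided 1-D convolutions with a saturating nonlinearity can map an audio segment to multiple frequency channels whose response depends on sound intensity. This is only an illustrative toy in NumPy with random, hypothetical layer shapes — it is not the published CoNNear architecture or its trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels, stride=2):
    """Strided 1-D convolution. x: (T, C_in); kernels: (K, C_in, C_out)."""
    K, _, C_out = kernels.shape
    out_len = (x.shape[0] - K) // stride + 1
    out = np.empty((out_len, C_out))
    for t in range(out_len):
        # Correlate one window of the signal with every output-channel kernel.
        out[t] = np.tensordot(x[t * stride:t * stride + K], kernels,
                              axes=([0, 1], [0, 1]))
    return out

def encode(audio, layers):
    """Map a mono audio segment (T, 1) to multi-channel outputs.
    The tanh between layers makes the filtering level-dependent
    (compressive), loosely analogous to cochlear compression."""
    h = audio
    for k in layers:
        h = np.tanh(conv1d(h, k))
    return h

# Hypothetical layer shapes: two layers, kernel length 8, widening to
# 16 channels (chosen for illustration only).
layers = [rng.standard_normal((8, 1, 8)) * 0.3,
          rng.standard_normal((8, 8, 16)) * 0.3]

# A 1 kHz tone at 16 kHz sampling rate as a test input.
audio = np.sin(2 * np.pi * 1000 * np.arange(2048) / 16000).reshape(-1, 1)
channels = encode(audio, layers)
```

Because of the saturating nonlinearity, scaling the input up by a factor of ten scales the output by less than ten — a crude stand-in for the intensity-dependent tuning the paragraph above describes.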