Introduction to CSR8670

Posted in Embedded C/C++

CSR, even before it was acquired by Qualcomm, was a giant in wireless audio for consumer electronics, providing a significant share of the Bluetooth-enabled chips that power earphones, headphones and other audio devices. These chips come in two variants – ROM chips and flash chips. The ROM chips are furnished with a FW that cannot be changed, but that can be configured to a large extent in terms of the external HW components, the exact features that are enabled and the user interface of the headphones (from buttons and their actions, to voice-overs and sound effects). The flash chips, on the other hand, provide a fully programmable environment that allows developing whatever FW the product requires.

The CSR8670 (and its more expensive and powerful brother, the CSR8675, which shares the same architecture, code base and development tools) has a main MCU running a proprietary CSR embedded environment (called the VM) with application code written in C, and a DSP co-processor that is optimized for audio signal processing and is programmed in a proprietary language called Kalimba, which is similar to most assembly languages in syntax but has some higher-level instructions and a large pre-built library for common audio-related tasks.

Audio is handled by 16-bit ADCs and DACs, as well as via I2S or PCM digital interfaces. Other HW components on this SoC include input and output amplification, a Bluetooth 4 radio (using an external antenna), a controller for three LEDs, a USB interface, SPI to an external flash memory, an I2C interface and GPIOs. Overall, it really is a System on Chip for all your wireless and wired audio needs.
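
Most of these peripherals are driven directly from the C application through simple trap calls. As a taste of that API style, here is a small sketch; PioSetDir32()/PioSet32() and LedConfigure() are ADK traps, but the pin number, the LED choice and the duty-cycle/period values below are arbitrary assumptions that should be checked against pio.h and led.h for your ADK version.

    #include <pio.h>
    #include <led.h>

    /* Illustrative only: drive PIO2 as an output and let the HW LED
       controller flash LED_0 without any further VM involvement. */
    static void setup_outputs(void)
    {
        PioSetDir32(1UL << 2, 1UL << 2);            /* make PIO2 an output   */
        PioSet32(1UL << 2, 1UL << 2);               /* drive it high         */

        LedConfigure(LED_0, LED_DUTY_CYCLE, 0x7FF); /* medium brightness     */
        LedConfigure(LED_0, LED_PERIOD, 2);         /* slow the flash down   */
        LedConfigure(LED_0, LED_FLASH_ENABLE, 1);   /* flash in HW           */
        LedConfigure(LED_0, LED_ENABLE, 1);
    }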

Overall the CSR8670 looks like the perfect SoC for anybody interested in doing advanced functionality with audio, both wired and wireless. However, CSR is known in the industry as a tight-lipped company, with poor technical support and meager documentation. The acquisition by Qualcomm did little to improve this. For the embedded FW engineer trying to implement the product requirements, this is a major stumbling block. It is aggravated by the lack of external resources online on this platform: although it is used by countless companies in their audio products, there are almost no blogs, forums or articles about how to work with this chip (aside from a few Chinese blogs and a Chinese forum that appears to be dedicated to CSR’s products).

Initially this might be considered an irrelevant issue – one assumes that once a formal company (and not just a curious individual) approaches CSR/Qualcomm with an intent of purchasing large quantities of chips, the hidden documentation will emerge and technical know-how will flow. Sadly, this couldn’t be further from the truth. It is even whispered that CSR won’t consider a customer until its orders are in the millions of chips. You can still buy the chips, but don’t expect any level of real support. Yes, there is a technical support site (with mostly higher-level documents, and with many crucial technical points completely missing or a decade out of date); yes, there is a technical support email (which almost never replies to the emails sent to it, or simply directs you to CSR’s local sales representative in your area); and yes, there are some code examples (a few being utterly trivial, and a few being monstrously complex, with nothing in the middle). All of this leads to a technical nightmare for the engineer, with real and serious implications for the time-to-market.

After contacting CSR, buying an evaluation board and purchasing a support contract you will have the CSR IDE – the Audio Development Kit (or ADK). This is a rather old-style IDE, with no IntelliSense and little in the way of advanced debug capabilities. It does allow the user to watch variables (both in the C code and in the DSP), set breakpoints (only while execution is stopped) and view the registers (more interesting for the DSP). It might be more convenient to write the code in a third-party editor or IDE (such as IntelliJ, Eclipse or Vim/Emacs) and use the ADK just for setting configuration, compiling and running the code.

In addition to being an IDE, the ADK installation also includes all the libraries CSR provides for writing FW on the CSR8670. These include both C code libraries and Kalimba code libraries for the DSP. Most of the libraries actually include the source code (this applies to a large part of both the C libraries and the Kalimba libraries) and are compiled during the installation of the ADK (and can be recompiled if needed). Other parts of the libraries are closed-source and only the header files are included.

When starting to write your FW, there are broadly two approaches:

  1. Write your FW from scratch – in this approach you start with a blank workspace and write both the VM code and the DSP code. One can use the examples my_first_dsp_app and my_second_dsp_app to get started, but most of the code will be written afresh. This will require figuring out which APIs to use to do various operations on the FW and the HW peripherals and how to tie everything together to build the product you want.
  2. Use the provided Sink code – in this approach you start with the full-blown Sink application that is provided by CSR as the starting place for your project. The Sink application provides the full functionality of an audio sink (meaning the side that receives the audio and actually plays it through speakers, as opposed to the source – typically a mobile phone or PC – which pushes the audio to the sink), with support for Bluetooth, wired connections, BLE, voice prompts, buttons, LED states, user events and much more.

It may seem that both approaches have their merits, but it is important to note that CSR’s Bluetooth stack is horribly undocumented and requires a lot of user glue logic to work correctly with the open-source as well as the closed-source API libraries. Simply trying to begin Bluetooth advertising requires a complex flow of initialization and message handling which is not explained in any support document (other than a few misleading documents that refer to much older versions of the ADK, or even to other CSR products that provide Bluetooth). It is possible to work with the Bluetooth library like that, but it requires a lot of reverse engineering and trial and error, and it will typically make it hard to add new Bluetooth features later (such as supporting A2DP, HFP and AVRCP at the same time). On the other hand, the Sink application has a huge code base, with every feasible feature implemented and controllable through run-time configuration (and some compile-time configuration as well). This means that even though the Sink code provides you with full Bluetooth functionality, it also adds a lot of other features, and it is very difficult to add custom code (be it a unique user experience or advanced audio processing). The best rule of thumb is to write code from scratch when Bluetooth is not required (typically when the audio handling is wired, either through the ADC or through the I2S) and to only use the Sink application when Bluetooth is a requirement.
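
To give a feel for what "from scratch" means in practice, here is a minimal sketch of a VM application: a single task whose handler waits for the connection library to finish initializing. ConnectionInit(), MessageLoop(), Panic() and the CL_INIT_CFM message are part of the ADK libraries; the task and handler names and the exact shape of the status check are illustrative and may need adjusting to your ADK version.

    #include <connection.h>
    #include <message.h>
    #include <panic.h>
    #include <stdio.h>

    static void app_handler(Task task, MessageId id, Message message);

    /* A VM application is built around tasks; each library delivers its
       messages to the task you registered with it. */
    static TaskData app_task = { app_handler };

    static void app_handler(Task task, MessageId id, Message message)
    {
        switch (id)
        {
        case CL_INIT_CFM:
            /* The Bluetooth stack is only usable after this confirmation. */
            if (((const CL_INIT_CFM_T *) message)->status != success)
                Panic();
            printf("Connection library initialised\n");
            /* From here: register A2DP/HFP/AVRCP, make the device
               connectable and discoverable, connect streams to the DSP, etc. */
            break;

        default:
            break;
        }
    }

    int main(void)
    {
        /* Kick off the Bluetooth stack; the result arrives as CL_INIT_CFM. */
        ConnectionInit(&app_task);

        /* Hand control to the firmware scheduler; this call never returns. */
        MessageLoop();
        return 0;
    }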

In the next posts we will dive into the nitty-gritty aspects of writing a FW for the CSR8670.

27 thoughts on “Introduction to CSR8670”

  1. I was wondering if it is possible to view and modify the microphone audio as it comes into the VM through the ADC. Once we call StreamConnect and send it to Kalimba, it is possible to view the audio samples as 24-bit numbers in Kalimba’s memory. Is it possible to do this in the VM itself, before the audio is sent to Kalimba (or, conversely, to view and modify the data that goes to the DAC, in the VM)?

    1. The VM actually doesn’t take part in the audio processing, and for a good reason – the DSP is optimized to handle the audio, while the VM is not. The VM wouldn’t be able to handle the audio samples fast enough. Typically you would implement all of your audio handling logic in the DSP (assuming that the logic is feasible to implement there in Kalimba). If that logic requires some input from the VM you can pass parameters or configuration via messages from the VM to the DSP, and get some data back from the DSP via messages from the DSP to the VM (but that channel is very low-bandwidth).
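
      For illustration, here is a minimal sketch of that VM-to-DSP signalling. KalimbaSendMessage(), MessageKalimbaTask() and the MESSAGE_FROM_KALIMBA message are real ADK traps/messages; the message IDs, the payload meaning and the exact struct field names are assumptions that must match whatever the loaded Kalimba application expects.

      #include <csrtypes.h>
      #include <kalimba.h>
      #include <message.h>
      #include <stdio.h>

      #define MSG_VM_SET_GAIN      0x7001 /* hypothetical ID understood by the DSP */
      #define MSG_DSP_LEVEL_REPORT 0x7002 /* hypothetical ID sent back by the DSP  */

      static void dsp_handler(Task task, MessageId id, Message message)
      {
          if (id == MESSAGE_FROM_KALIMBA)
          {
              const MessageFromKalimba *m = (const MessageFromKalimba *) message;
              if (m->id == MSG_DSP_LEVEL_REPORT)
                  printf("DSP reported level %d\n", (int) m->data[0]);
          }
      }

      static TaskData dsp_task = { dsp_handler };

      static void configure_dsp(uint16 gain)
      {
          /* Route messages coming from the Kalimba to our task. */
          MessageKalimbaTask(&dsp_task);

          /* Each short message carries an ID plus up to four 16-bit words. */
          KalimbaSendMessage(MSG_VM_SET_GAIN, gain, 0, 0, 0);
      }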

  2. Hi thanks for your reply and thanks for making this blog. I have a question regarding the ADK4.0.1 Source project if you don’t mind.

    First some background:
    I downloaded the Source project to one CSR8675 development board (H13478v2+H23223v1) and the Sink project to another. I merged the source_analogue.psr PSKey configuration, and I was able to pair the two boards together. If I connect the output of a music player to “Line In” on the Source board, I can hear music from the earphones on the Sink board, so presumably A2DP streaming is happening correctly.

    However, what I need to do is stream the mic input from the Source board to the Sink board. If we look at the my_first_dsp_app program, we can see that the following lines of code “enable” Mic In

    #ifdef MIC_INPUT
    PanicFalse( SourceConfigure(audio_source_a, STREAM_CODEC_MIC_INPUT_GAIN_ENABLE, 1) );
    PanicFalse( SourceConfigure(audio_source_b, STREAM_CODEC_MIC_INPUT_GAIN_ENABLE, 1) );

    PanicFalse(MicbiasConfigure(MIC_BIAS_0, MIC_BIAS_ENABLE, MIC_BIAS_FORCE_ON));
    PanicFalse(MicbiasConfigure(MIC_BIAS_1, MIC_BIAS_ENABLE, MIC_BIAS_FORCE_ON));
    #endif /* MIC_INPUT */

    If these lines are removed, the audio input automatically comes from Line In.

    My question is, is it possible to use Mic In (both left and right channels) as my audio input in ADK4.0.1 Source project, instead of Line In? The my_first_dsp_app program clearly defines the variables representing the left and right channel ADC as audio_source_a and audio_source_b respectively, but I cannot find anything similar to this in the Source project.

    How can I change the audio input from Line In to Mic In in the Source project? Is this possible? If not, why? And if yes, how can I do it?

    1. I started writing a reply, but realized that a proper post is in order. Until then, the main thing to remember is that the CSR8675 has only one stereo analog input, so both MIC input and LINE input end up in the same place inside the CSR. The difference is mostly in configuration and the PCB itself.

      1. Thanks a lot
        So what kind of configuration should I be searching for in the Source project to make it take input from Mic In (or is this impossible)? Please provide a hint if possible until the post is ready.

        Again thanks for the great blog.

      2. Thank you for the hint; with it I managed to get the Mic In input to work in the Source project, but I am still looking forward to your post so that I can compare what I did with your method.

        Meanwhile I might need to start work on connecting the Sink project with a smartphone using BLE and USB, so I was wondering if you could share any ideas on those. The Sink program can be used as a USB microphone and speaker with a PC, but I was wondering if this would be feasible with an Android (or Apple) smartphone and tablet, assuming the smartphone has the appropriate Android version and OTG capabilities. Would it be feasible to stream audio to and from a smartphone using USB, or is it not possible due to driver issues?

      3. I am sorry but I have another question, if you don’t mind. Does the Kalimba processor in the 8675 and 8670 have something similar to the VM’s printf() function? I want to be able to view Kalimba memory and register values in the debug window, or on the PC if it is possible to do something like this using the SPI port while debugging. I would like to be able to call a function in Kalimba and then have Kalimba register or memory values displayed, just as I would do using printf() in the VM.

        1. You may already know this, but the MATLAB debug APIs can do more than printf(). There is a detailed document from CSR/QTIL on this; please refer to the support documentation.

    1. Sadly, no. CSR only provides SPI capabilities for an external flash memory (up to 64 Mbits supported). There is no other API to use the SPI lines.

  3. Hi Eli!

    Shaun here from Vehroot.com! We are about to launch a KICKSTARTER campaign on 12-12-2017 using the CSR8675 solution for our Vehroot Shelf. You are right about limited support from techsupportus@qca.qualcomm.com; I have been able to glean some information and guidance occasionally, but it’s sparse. We are launching with the BC127 HD, originally by bluecreation.com and now a sierrawireless.com product. The BC-127 HD module sits atop the CSR8675 and uses their firmware (Melody) to set commands and settings, and we are very happy with the simplified firmware. But launching a company with a module that uses its own firmware to communicate with the CSR8675 is a bit worrisome! I have jumped through Qualcomm’s NDA hoops and purchased the BlueDev license to get legitimate access to the ADK, and have begun the process with various CSR8675 development boards to start learning how to program the CSR8675 using the ADK. A daunting process to say the least! Ultimately I think it will be time well spent, given how powerful and feature-rich the CSR8675 is, specifically for our needs… cVc noise cancellation and the quality of the AptX HD codec for just a start.
    My question for you is whether you have a resource or starting guide covering some of the basics of the CSR8675?
    An overview of the RICK (CSR8675) firmware, and of what each program in the ADK does?

    We would be willing to contract the person that can help me get started, and possibly work out an ongoing support agreement…

    At any rate, thanks for the write-up on the modules… I’ll keep my eye on this space!

  4. For a wired application with Qualcomm’s CVC in the DSP, is there anything preventing the UART or I2S from being used instead of Bluetooth if we write our own VM firmware?

    1. The problem with the CVC specifically is that it’s mostly closed source (since it requires a license to use), hence it is much more difficult to make any alterations to the regular flow. Since CVC is used for HSP, the DSP image is also in charge of the SCO decoding before the CVC enhancements are run. There might be some official or unofficial way of supporting this, but given CSR/Qualcomm’s lack of technical support, I would assume it would be a long struggle (with the very real potential of this being an impossible task).

  5. Excellent article! I too have found CSR/Qualcomm a nightmare to work with. However, they seem to definitely be the leader in this space. I have two questions:

    1) Do you know of any similar Bluetooth Audio SoC solutions from other “more friendly” chip makers? I’ve had very little luck finding anything that compares to CSR8675.

    2) The 64Mb external flash memory is quite limiting, especially for products that need to store a lot of songs, etc. I need to interface to a SD card with a few GB for storing songs. My plan is to add a second MCU (Cortex-M) to interface with the SD card. This second MCU will then interface with the CSR8675 via either USB or UART. Any thoughts on this strategy?

    Thanks again for an awesome post!

    1. To answer your questions:
      1. Sadly, no other SoC offers the full capabilities of the CSR8670/5. Depending on the exact requirements of the product a suitable alternative could be found, but it would typically involve more than a single chip. For example, simple Bluetooth-only headphones could use a Bluetooth chip outputting I2S together with an I2S DAC and amplifier. In addition, using separate components can lead to better sound quality and potentially simpler FW development (if using well-supported chips), at the expense of a more complex HW design and probably higher cost as well.
      2. With that requirement, I would prefer using I2S between the MCU and the CSR chip. It is more suitable as an interconnect for passing audio and allows greater flexibility on the CSR side, since an I2S input is a simple input, as opposed to a USB input (or Bluetooth, for that matter), which is actually a very complex beast in the CSR FW; a minimal sketch of routing an I2S input into the DSP is shown below. With regard to UART, I don’t think it’s very suitable for passing the audio, especially with the poor support from CSR’s APIs for handling large amounts of data from the UART to the DSP for playback.
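
        As a rough sketch of that I2S routing: StreamAudioSource(), StreamKalimbaSink() and StreamConnect() are real ADK traps, but the I2S hardware/instance/slot enum values, the Kalimba port numbers and the omitted sample-rate/format configuration are assumptions to check against stream.h and the DSP application actually loaded.

        #include <stream.h>
        #include <panic.h>

        /* Illustrative only: feed a stereo I2S input straight into the DSP. */
        static void connect_i2s_to_dsp(void)
        {
            Source left  = StreamAudioSource(AUDIO_HARDWARE_I2S, AUDIO_INSTANCE_0, AUDIO_CHANNEL_SLOT_0);
            Source right = StreamAudioSource(AUDIO_HARDWARE_I2S, AUDIO_INSTANCE_0, AUDIO_CHANNEL_SLOT_1);

            /* Kalimba ports 0 and 1 are assumed to be the DSP app's inputs. */
            if (!StreamConnect(left,  StreamKalimbaSink(0)) ||
                !StreamConnect(right, StreamKalimbaSink(1)))
                Panic();
        }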

      1. Excellent reply, thank you so much! After posting my question it occurred to me that I2S would be a much better way to pass audio, so I’m glad you confirmed my thought.

        You have some great info here and I plan to link to your site in the near future.

        Best wishes!

        1. As an alternative you can pass control of the SPI PIOs to the Kalimba and then use it to bit-bang communication out to the flash. With separate bootmodes you can decide whether the Kalimba or the VM has control of the PIOs, which still leaves the opportunity to do a DFU down the road.

  6. “In the next posts we will dive into the nitty-gritty aspects of writing a FW for the CSR8670.”

    Did you ever get around to writing that post?

    1. Sadly not yet. If you have some specific questions I’d be happy to help, though. Writing a post is much harder than answering questions.

  7. Hi
    Sorry, this is not about the 8670/75, but I was hoping you might have some ideas on this.

    Qualcomm recently announced the QCC5100 series chips in Jan. 2018, and looking at the specifications they have all the features of the 8670/75, plus some improvements (dual-core processor, power reduction, etc.). Is a development board or any information about this chip available at all? Is it a replacement for the 8670/75, or will the two series of chips be developed side by side?

    1. That’s actually a very good question. The CSR8670/75 are very old chips, and CSR/Qualcomm have been keeping them up-to-date well (the latest ADK supports Bluetooth 5).
      As far as I understood from some Qualcomm sales presentations, the new chips are meant to replace the old CSR8670/75. It will probably take a good long while to fully drop support and manufacturing of the older chips, but they probably won’t get any updates or fixes.

  8. Hi! I am working with the CSR8675 too. Now I am trying to connect the CSR8675 to another MCU over BLE, with the CSR8675 as peripheral and the other MCU connected to an HM-10 (Bluetooth 4.0 module) as central. But I don’t know whether it is possible to communicate between a 4.0 central and a 4.2 peripheral? If it is possible, is it easy to implement on the CSR8675?
    Thank you 🙂

    1. Yes, Bluetooth in general and BLE in particular have great backwards and forwards compatibility, so you shouldn’t have any problems doing that connection over BLE.

      With regard to how simple it is, BLE is certainly simpler than classic Bluetooth in the CSR FW, but it does have its learning curve. The Sink application is a good place to start to get a feel for how the BLE is defined, initialized and used. The built-in profiles, such as the Battery Service profile, are fully open source in the ADK installation (under src/lib/gatt_battery_server for the Battery Service, for example) and can be used as inspiration for how to write your own custom BLE profile. The BLE database definition is done using a special .db file, and an example is available in the Sink demo application.

  9. Hi, I’m using the QCC5121 with ADK 6.2. I have a question regarding the media volume control. With iOS I see that AVRCP absolute volume control is used to trigger the volume change on the QCC board. However, with Android the volume is changed but I do not see any debug message printed, and I’m kind of confused about what the key trigger for this volume change could be. There is no GAIA running. I’m printing all messages from A2DP/AVRCP etc., but it never shows any function being called, unlike iOS, yet the output volume gain still changes. Can you suggest what the trigger point for volume could be in this case?

    1. Single volume control is not consistent in Android. It was only introduced in Android 6, and even then some phones managed to break it.
      On the other hand, iOS has had Bluetooth single volume control since the very first versions, so all iOS devices will work as expected.

      When single volume control is not available, you’ll have two separate volume controls – one on the phone (by lowering the volume of the digital A2DP signal) and one in the chip (typically via digital volume control in the DSP, but it can also be implemented via the output gain). So, when you change the volume on the phone, the chip doesn’t get notified, and vice versa (changing the volume on the chip doesn’t notify the phone). This can lead to a poor user experience (with maximal volume on the headphones and minimal volume on the phone, for example), but it can’t really be avoided with some of the Android phones.
