Introduction to CSR8670

Posted in Embedded C/C++

CSR, even before it was acquired by Qualcomm, is a giant in wireless audio for consumer electronics, providing a significant share of the Bluetooth-enabled chips that power earphones, headphones and other audio devices. These come in two variants – ROM chips and flash chips. The ROM chips are furnished with a FW that cannot be changed, but that can be configured to a large extent in terms of external HW components, the exact features that are enabled and the user interface of the headphones (from buttons and their actions, to voice-overs and sound effects). The flash chips, on the other hand, provide a fully programmable environment that allows developing any sort of FW that is required for a product.

The CSR8670 (and its more expensive and powerful sibling, the CSR8675, which shares the same architecture, code base and development tools) has a main MCU running a proprietary CSR embedded environment (called the VM) with code written in C, and a DSP co-processor that is optimized for audio signal processing and is programmed in a proprietary language called Kalimba, which is similar to most assembly languages in syntax, but with some higher-level instructions and a large pre-built library for common audio-related tasks.

Audio is handled by 16-bit ADCs and DACs, as well as via I2S or PCM digital interfaces. Other HW components on this SoC include input and output amplification, a Bluetooth 4 radio (using an external antenna), a 3-LED controller, a USB interface, SPI to an external flash memory, an I2C interface and GPIOs. Overall, it really is a System on Chip for all your wireless and wired audio needs.

Overall the CSR8670 looks like the perfect SoC for anybody interested in doing advanced functionality with audio, both wired and wireless. However, CSR is known in the industry as a tight-lipped company, with poor technical support and meager documentation. The acquisition by Qualcomm did little to improve this. For the embedded FW engineer trying to implement the product requirements, this is a major stumbling block. It is aggravated by a lack of external resources online on this platform: although it is used by countless companies in their audio products, there are almost no blogs, forums or articles about how to work with this chip (aside from a few Chinese blogs and a Chinese forum that looks like it is dedicated to CSR’s products).

Initially this might be considered an irrelevant issue – one assumes that once a formal company (and not just a curious individual) approaches CSR/Qualcomm with an intent to purchase large quantities of chips, the hidden documentation will emerge and technical know-how will flow. Sadly, this couldn’t be farther from the truth. It is even whispered that CSR won’t consider a customer until its orders are in the millions of chips. You can still buy the chips, but don’t expect any level of real support. Yes, there is a technical support site (with mostly higher-level documents, and many crucial technical points completely missing or a decade out of date); yes, there is a technical support email (which almost never replies, or simply directs you to CSR’s local sales representative in your area); and yes, there are some code examples (a few being utterly trivial, a few being monstrously complex, and nothing in the middle). All of this leads to a technical nightmare for the engineer, with real and serious implications on the time-to-market.

After contacting CSR, buying an evaluation board and purchasing a support contract, you will have the CSR IDE – the Audio Development Kit (or ADK). This is a rather old-style IDE, with no IntelliSense and little in the way of advanced debug capabilities. It does allow the user to watch variables (both in the C code and in the DSP), set breakpoints (only while stopped) and view the registers (more interesting for the DSP). It might be more convenient to write the code in a third-party IDE (such as IntelliJ, Eclipse or Vim/Emacs) and use the ADK just for setting configuration, compiling and running the code.

In addition to being an IDE, the ADK installation also includes all the libraries CSR provides for writing FW on the CSR8670. These include both C code libraries and Kalimba code libraries for the DSP. Most of the libraries actually include the source code (this applies to a large part of the C libraries as well as the Kalimba libraries), which is compiled during the installation of the ADK (and can be recompiled if needed). Other parts of the libraries are closed source, and only the header files are included.

When starting to write your FW, there are broadly two approaches:

  1. Write your FW from scratch – in this approach you start with a blank workspace and write both the VM code and the DSP code. One can use the examples my_first_dsp_app and my_second_dsp_app to get started, but most of the code will be written afresh. This will require figuring out which APIs to use to do various operations on the FW and the HW peripherals and how to tie everything together to build the product you want.
  2. Use the provided Sink code – in this approach you start with the full-blown Sink application that is provided by CSR as the starting point for your project. The Sink application provides the full functionality of an audio sink (meaning the side receiving the audio and actually playing it through speakers, as opposed to the source, typically a mobile phone or PC, which pushes the audio to the sink), with support for Bluetooth, wired connections, BLE, voice prompts, buttons, LED states, user events and much more.

It may seem that both approaches have their merits, but it is important to note that CSR’s Bluetooth stack is horribly undocumented and requires a lot of user glue logic to work correctly with both the open-sourced and the closed-sourced API libraries. Simply starting Bluetooth advertising requires a complex flow of initialization and message handling which is not explained in any support document (other than a few misleading documents that refer to much older versions of the ADK, or even to other CSR products that provide Bluetooth). It is possible to work with the Bluetooth library like that, but it requires a lot of reverse engineering and trial and error, and will typically leave the FW unable to easily add new Bluetooth features (such as supporting A2DP, HFP and AVRCP at the same time). On the other hand, the Sink application has a huge code base, with every feasible feature implemented and controllable through run-time configuration (and some compile-time configuration as well). This means that even though the Sink code provides you with full Bluetooth functionality, it also adds a lot of other features, and it is very difficult to add custom code (be it a unique user experience or advanced audio processing). The best rule of thumb is to write code from scratch when Bluetooth is not required (typically when the audio handling is wired, either through the ADC or through I2S) and to use the Sink application only when Bluetooth is a requirement.

In the next posts we will dive into the nitty-gritty aspects of writing a FW for the CSR8670:

CSR tools and demo projects

CSR ADK Buttons configuration

62 thoughts on “Introduction to CSR8670”

  1. I was wondering if it is possible to view and modify the microphone audio as it comes into the VM through the ADC. Once we call StreamConnect and send it to Kalimba, it is possible to view the audio samples as 24-bit numbers in Kalimba’s memory. Is it possible to do this in the VM itself, before the audio is sent to Kalimba (or conversely, view and modify the data that goes to the DAC, in the VM)?

    1. The VM actually doesn’t take part in the audio processing, and for a good reason – the DSP is optimized to handle the audio, while the VM is not. The VM wouldn’t be able to handle the audio samples fast enough. Typically you would implement all of your audio handling logic in the DSP (assuming that the logic is feasible to implement there in Kalimba). If that logic requires some input from the VM you can pass parameters or configuration via messages from the VM to the DSP, and get some data back via messages from the DSP to the VM (but that is very low-bandwidth).

  2. Hi thanks for your reply and thanks for making this blog. I have a question regarding the ADK4.0.1 Source project if you don’t mind.

    First some background:
    I downloaded the Source project in one CSR8675 Development board (H13478v2+H23223v1) and the Sink project in another. I merged the source_analogue.psr pskey configuration, and I was able to pair the two boards together. If I connect output from a music player into “Line In” in the Source board, I can hear music from the earphones in the Sink board, so presumably A2DP streaming is happening correctly.

    However, what I need to do is stream the mic input from the Source board to the Sink board. If we look at the my_first_dsp_app program, we can see that the following lines of code “enable” Mic In:

    #ifdef MIC_INPUT
    PanicFalse( SourceConfigure(audio_source_a, STREAM_CODEC_MIC_INPUT_GAIN_ENABLE, 1) );
    PanicFalse( SourceConfigure(audio_source_b, STREAM_CODEC_MIC_INPUT_GAIN_ENABLE, 1) );

    PanicFalse(MicbiasConfigure(MIC_BIAS_0, MIC_BIAS_ENABLE, MIC_BIAS_FORCE_ON));
    PanicFalse(MicbiasConfigure(MIC_BIAS_1, MIC_BIAS_ENABLE, MIC_BIAS_FORCE_ON));
    #endif /* MIC_INPUT */
    If these lines are removed, the audio input automatically comes from Line In.

    My question is, is it possible to use Mic In (both left and right channels) as my audio input in ADK4.0.1 Source project, instead of Line In? The my_first_dsp_app program clearly defines the variables representing the left and right channel ADC as audio_source_a and audio_source_b respectively, but I cannot find anything similar to this in the Source project.

    How can I change the audio input from Line In to Mic In in the Source project? Is this possible? If not, why? And if yes, how can I do it?

    1. I started writing a reply, but realized that a proper post is in order. Until then, the main thing to remember is that the CSR8675 has only one stereo analog input, so both MIC input and LINE input end up in the same place inside the CSR. The difference is mostly in configuration and the PCB itself.

      1. Thanks a lot
        So what kind of configuration should I be searching for in the Source project to make it take input from Mic In (or is this impossible?). Please provide a hint if possible until the post is ready.

        Again thanks for the great blog.

      2. Thank you for the hint – I managed to get the Mic In input to work in the Source project, but I am still looking forward to your post so that I can compare what I did with your method.

        Meanwhile I might need to start work on connecting the Sink project with a smartphone using BLE and USB, so I was wondering if you could share any ideas on those. The Sink program can be used as a USB microphone and speaker with a PC, but I was wondering if this would be feasible with an Android (or Apple) smartphone or tablet, assuming the smartphone has the appropriate Android version and OTG capabilities. Would it be feasible to stream audio to and from the smartphone using USB, or is it not possible due to driver issues?

      3. I am sorry but I have another question if you don’t mind. Does the Kalimba processor in the 8675 and 8670 have something similar to the VM’s printf() function? I want to be able to view Kalimba memory and register values in the debug window, or on the PC if it is possible to do something like this using the SPI port while debugging. I would like to be able to call a function in Kalimba, then have Kalimba register or memory values displayed, just as I would do using printf() in the VM.

        1. You may already know this, but the MATLAB debug APIs can do more than printf(). There is a detailed document from CSR/QTIL on this; please refer to the support documentation.

    1. Sadly, no. CSR only provides SPI capabilities for an external flash memory (up to 64 Mbit supported). There is no other API to use the SPI lines.

  3. Hi Eli!

    Shaun here from! We are about to launch a Kickstarter campaign 12-12-2017 using the CSR8675 solution for our Vehroot Shelf. You are right about limited support from, I have been able to glean some information and guidance occasionally but it’s sparse. We are launching with the BC127 HD, originally by, now a product. The BC-127 HD module sits atop the CSR8675 and uses their firmware (Melody) to set commands and settings, and we are very happy with the simplified firmware. But launching a company with a module that uses its own firmware to communicate with the CSR8675 is a bit worrisome! I have jumped through Qualcomm’s NDA hoops and purchased the BlueDev license to get legit access to the ADK, and have begun the process with various CSR8675 development boards to start learning how to program the CSR8675 using the ADK. A daunting process to say the least! Ultimately I think it will be time well spent given how powerful and feature-rich the CSR8675 is, specifically for our needs… cVc noise cancellation and the quality of the aptX HD codec for just a start.
    My question for you is if you have a resource or starting guide of some of the basics with CSR8675?
    An overview of the RICK (CSR8675) firmware, and what each program in the ADK does?

    We would be willing to contract the person that can help me get started and possibly work as an ongoing support agreement…

    At any rate thanks for the write up on the modules… Ill keep my eye on this space!

  4. For a wired application with Qualcomm’s CVC in the DSP: is there anything preventing the UART or I2S from being used instead of Bluetooth, if we write our own VM firmware?

    1. The problem with the CVC specifically is that it’s mostly closed source (since it requires a license to use). Hence it is much more difficult to make any alterations to the regular flow. Since CVC is used for HSP, the DSP image is also in charge of the SCO decoding, before the CVC enhancements are run. There might be some official or unofficial way of supporting this, however given CSR/Qualcomm’s lack of technical support, I would assume it would be a long struggle (with the very real potential of this being an impossible task).

  5. Excellent article! I too have found CSR/Qualcomm a nightmare to work with. However, they seem to definitely be the leader in this space. I have two questions:

    1) Do you know of any similar Bluetooth Audio SoC solutions from other “more friendly” chip makers? I’ve had very little luck finding anything that compares to CSR8675.

    2) The 64Mb external flash memory is quite limiting, especially for products that need to store a lot of songs, etc. I need to interface to a SD card with a few GB for storing songs. My plan is to add a second MCU (Cortex-M) to interface with the SD card. This second MCU will then interface with the CSR8675 via either USB or UART. Any thoughts on this strategy?

    Thanks again for an awesome post!

    1. To answer your questions:
      1. Sadly, no other SoC offers the full capabilities of the CSR8670/5. Depending on the exact requirements of the product, a suitable alternative could be found, but would typically involve more than a single chip. For example, simple Bluetooth only headphones could use a Bluetooth chip outputting I2S and an I2S DAC and amplifier. In addition, using separate components can lead to better sound quality and potentially simpler FW development (if using well supported chips), at the expense of more complex HW design and probably cost as well.
      2. With that requirement, I would prefer using I2S between the MCU and the CSR chips. It is more suitable as an interconnect for passing audio and allows greater flexibility on the CSR side (since an I2S input is a simple input, as opposed to a USB input (or Bluetooth, for that matter) that is actually a very complex beast in the CSR FW). With regard to UART, I don’t think it’s very suitable for passing the audio, especially with the poor support from CSR’s APIs for handling large amounts of data from the UART to the DSP for playback.

      1. Excellent reply, thank you so much! After posting my question it occurred to me that I2S would be a much better way to pass audio, so I’m glad you confirmed my thought.

        You have some great info here and I plan to link to your site in the near future.

        Best wishes!

        1. As an alternative you can pass control of the SPI PIOs to the Kalimba and then use it to bit-bang communication out to the flash. With separate bootmodes you can decide whether the Kalimba or the VM has control of the PIOs, which still leaves the opportunity to do a DFU down the road.

  6. “In the next posts we will dive into the nitty-gritty aspects of writing a FW for the CSR8670.”

    Did you ever get around to writing that post?

    1. Sadly not yet. If you have some specific questions I’d be happy to help, though. Writing a post is much harder than answering questions.

  7. Hi
    Sorry this is not about the 8670/75, but I was hoping you might have some ideas on this

    Qualcomm recently announced the QCC5100 series chips in Jan. 2018, and looking at the specifications they have all the features of 8670/75, plus some improvements (dual core processor, power reduction, etc). Is a development board or any information about this chip available at all? Is it a replacement for the 8670/75, or will the two series of chips be developed side by side?

    1. That’s actually a very good question. The CSR8670/75 are very old chips, and CSR/Qualcomm have been keeping them up-to-date well (the latest ADK supports Bluetooth 5).
      As far as I understood from some Qualcomm sales presentations, the new chips are meant to replace the old CSR8670/75. It will probably take a good long while to fully drop support and manufacturing of the older chips, but they probably won’t get any updates or fixes.

  8. Hi! I am working with the CSR8675 too. Now I am trying to connect the CSR8675 to another MCU by BLE, with the CSR8675 as peripheral and the other MCU connected to an HM-10 (Bluetooth 4.0 module) as central. But I don’t know if it is possible to communicate between 4.0 as central and 4.2 as peripheral? If possible, is it easy to implement on the CSR8675?
    Thank you 🙂

    1. Yes, Bluetooth in general and BLE in particular have great backwards and forwards compatibility, so you shouldn’t have any problems doing that connection over BLE.

      With regard to how simple it is, BLE is certainly simpler than classic Bluetooth in the CSR FW, but it does have its learning curve. The Sink application is a good place to start to get a feel for how the BLE is defined, initialized and used. The built-in profiles, such as the Battery Service profile, are fully open source in the ADK installation (under src/lib/gatt_battery_server for the Battery Service, for example) and can be used as inspiration for how to write your own custom BLE profile. The BLE definition is done using a special .db file, and an example is available in the Sink demo application.

  9. Hi, I’m using the QCC5121 with ADK 6.2. I have a question regarding the media volume control. With iOS I see that AVRCP absolute volume control is used to trigger volume changes on the QCC board. However, while using Android the volume is changed but I do not see any debug message printed, and I’m kind of confused what the key trigger for this volume change could be. There is no GAIA running. I’m printing all messages from A2DP/AVRCP etc., but it never shows any function called, unlike iOS, yet the output volume gain is still changed. Can you suggest what could be the trigger point for volume in this case?

    1. Single volume control is not consistent in Android. It was only introduced in Android 6, and even then some phones managed to break it.
      On the other hand, iOS had Bluetooth single volume control since the very first versions, so all iOS devices will work as expected.

      When single volume control is not available, you’ll have two separate volume controls – one on the phone (by lowering the volume on the digital A2DP signal) and one in the chip (typically via digital volume control in the DSP, but it can also be implemented via the output gain). So, when you change the volume on the phone, the chip doesn’t get notified, and vice-versa (changing volume on the chip doesn’t notify the phone). This can lead to poor user experience (with maximal volume on the headphones and minimal volume on the phone, for example), but can’t really be avoided with some of the Android phones.


  10. Hi,

    I have been using the CSR8670 development board, which supports BT v4.0. I need to know whether the same device can be updated to BT v5.0, or whether there is any updated CSR8670 board on the current market for purchase. We have been using this board for several BT testing purposes.

    1. CSR added Bluetooth 5 support starting from ADK4.2. The HW is the same; it is a pure toolchain and FW change.

      1. Hi,

        Thanks for the reply. So the board is upgradable to BT v5.0?

        If so, where can I upgrade it? Could you please let me know.

  11. CSR8670
    Does anybody know how to use the “sample_rate_converter” to change the 8KHz sampling rate to the 24KHz that goes to the I2S bus? The I2S has to work at 24KHz. ADK4.3.1.5 and ADK4.3.0 have a sample_rate_converter.asm in the C:\ADK_CSR867x.WIN4.3.1.5\apps\source file area.

    1. The sample_rate_converter is configured from M.setup_resampler using the src_operator_lookuptable table. Knowing CSR, it would take quite a bit of fiddling to get it to work correctly, especially with an up-sample that is not one of the default values (also considering the relative complexity of good quality resamplers), but it should work.
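
      To make the idea concrete (plain C, not the Kalimba sample_rate_converter itself), a 1:3 up-sampler conceptually emits two new samples between every pair of input samples; here simple linear interpolation stands in for the proper anti-imaging FIR a real resampler uses:

```c
#include <stddef.h>

/* 1:3 up-sampling (e.g. 8 kHz -> 24 kHz): emit each input sample plus two
 * interpolated samples. Linear interpolation is a crude stand-in for the
 * FIR low-pass filter a production resampler applies at this point.
 * Output buffer must hold 3*(in_len-1)+1 samples. */
static size_t upsample_3x(const short *in, size_t in_len, short *out)
{
    size_t n = 0;
    if (in_len == 0)
        return 0;
    for (size_t i = 0; i + 1 < in_len; i++) {
        int a = in[i], d = in[i + 1] - a;
        out[n++] = (short)a;               /* original sample */
        out[n++] = (short)(a + d / 3);     /* 1/3 of the way to the next */
        out[n++] = (short)(a + 2 * d / 3); /* 2/3 of the way to the next */
    }
    out[n++] = in[in_len - 1];             /* last sample has no successor */
    return n;
}
```

      A production resampler replaces the interpolation with a polyphase FIR, which is where the filter-coefficient fiddling mentioned above comes in.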

  12. We are currently using a CSR8670 to bring in audio through the A/D converters at 24KHz. The audio is then routed through the Kalimba and then to the I2S port. The I2S is in slave mode. We have all the source C code for the above part of the project.

    We now wish to add Bluetooth capability to the system. The above system is hooked to a long-range radio through the I2S bus. We want to be able to use Bluetooth headsets instead of wired headsets.
    This means that our CSR8670 will need to be in source mode for Bluetooth applications. Also, since Bluetooth only works in 8KHz or 16KHz modes, we need to use a “sample_rate_converter”. Since most noise-canceling algorithms are also based on 8KHz or 16KHz, we wish to switch to a system that is 8KHz based.
    1. Put our CSR8670 system into source Bluetooth mode so that it will be the master which will add in the other Bluetooth headsets. The whole Bluetooth system will have to be added.
    2. Set up all sampling rates to work at 8KHz for the A/D converters and the Bluetooth headsets.
    3. Use a “sample_rate_converter” to change the 8KHz sampling rate to the 24KHz which goes to the I2S bus. The I2S has to work at 24KHz. ADK4.3.1.5 and ADK4.3.0 have a sample_rate_converter.asm in the C:\ADK_CSR867x.WIN4.3.1.5\apps\source file area. You may have to come up with the filter coefficients for the upsampling and downsampling.
    4. Integrate this with our current C program that does volume control and side tone control.
    5. Add additional controls to switch between A/D inputs for the audio or Bluetooth.

    Who would I contact to take on this type of help?

    1. The plan you outlined seems reasonable based on your description of the requirements. It should be doable with some tweaks to the regular “source” application.

      With regard to the engineer to implement it, you’d need to find somebody knowledgeable in the CSR ecosystem, especially with modifying the Kalimba code, to make the modification you need.

  13. Hi my question is about serial communication between CSR8675 chip and the PC
    In the sink project properties there is an entry called transport which is set to BCSP by default
    I have already been able to achieve serial communication in my custom VM application using raw serial port data (for example, sending a single byte 0xa0 from my program on PC and displaying it in the xIDE print output tab), but I was wondering if this is the right approach
    If I understand how this works correctly, setting BCSP in the transport means I would have to encapsulate whatever I want to send to the CSR chip in the BCSP header and trailer, for example, if I want to send 0xa0, I would have to do something like [BCSP header]0xa0[BCSP trailer]
    Is this how serial communication between CSR chip and PC is supposed to happen? Through BCSP? Or is my method of using raw serial port data the correct approach?

    1. It depends on what you are trying to achieve.
      In general, you’d typically not use the serial communication except for development (which might very well be what you are trying to do), since it’s not well suited for production communication (due to the same limitations as other chip peripherals, i.e. lack of full control from the FW).
      If you do want to use the serial communication, raw serial would probably be the best choice. BCSP is used for communicating with the ADK (or VMSpy), and it is consumed first by the CSR VM. Since you already have raw serial communication working, I’d advise sticking with it, unless you run into problems with that.
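
      For reference, BCSP borrows SLIP-style framing: each packet is wrapped in 0xC0 delimiters, and any 0xC0 or 0xDB byte inside the payload is escaped. A sketch of just that framing layer, as I understand it (a real BCSP frame also carries a 4-byte header with link-control fields and a checksum, omitted here):

```c
#include <stddef.h>
#include <stdint.h>

/* SLIP-style framing as used by BCSP: 0xC0 delimits frames; payload bytes
 * 0xC0 / 0xDB are replaced by the escape pairs DB DC / DB DD respectively.
 * Returns the framed length, or 0 if the output buffer is too small. */
static size_t slip_frame(const uint8_t *payload, size_t len,
                         uint8_t *out, size_t cap)
{
    size_t n = 0;
    if (cap < 2)
        return 0;
    out[n++] = 0xC0;                                 /* frame start */
    for (size_t i = 0; i < len; i++) {
        uint8_t b = payload[i];
        uint8_t esc = (b == 0xC0) ? 0xDC : (b == 0xDB) ? 0xDD : 0;
        size_t need = esc ? 2 : 1;
        if (n + need + 1 > cap)                      /* +1 for closing 0xC0 */
            return 0;
        if (esc) {
            out[n++] = 0xDB;                         /* escape marker */
            out[n++] = esc;
        } else {
            out[n++] = b;
        }
    }
    out[n++] = 0xC0;                                 /* frame end */
    return n;
}
```

      So even a single raw byte like your 0xA0 goes out wrapped in delimiters (plus the header), which is why plain raw serial is simpler for a custom protocol.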

  14. This question is about the audio resolution in CSR8675 using ADK4.1

    The help file that comes with the installation of ADK4.1 says this:

    “ADK4.1 now supports 24Bit resolution for audio inputs and outputs, details of the feature are described in the Audio Sink Application User Guide, under section 10.3.”

    Section 10 of the 8675 datasheet says
    “Figure 10.1 shows the functional blocks of the interface. The codec supports stereo/dual-mono playback and recording of audio signals at multiple sample rates with a 24-bit resolution.”
    I believe the “codec” refers to the built-in ADC and DAC

    I can’t remember exactly where right now, but I am sure I have seen it mentioned in other documents as well, that 8675 supports 24-bit audio

    However, the VM memory has a data bus width of 16 bits. In the my_first_dsp_app program, when audio samples from the ADC are sent to Kalimba, they are 16-bit data in the VM (for example, xxyy) which becomes zero-padded to 24 bits after reaching Kalimba (for example, 00xxyy). In a similar manner, when audio in Kalimba has to be sent back to the VM to be routed to the DAC, the 24-bit data is first converted to 16 bits and then sent through a WRITE port.
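
    In code terms, what I am describing looks something like this (generic C with function names made up by me, not ADK code):

```c
#include <stdint.h>

/* What I observe: a 16-bit VM word 0xXXYY shows up in 24-bit Kalimba
 * memory zero-padded as 0x00XXYY, and on the way back the 24-bit value
 * is cut down to 16 bits again. Hypothetical helpers for illustration;
 * either way, only 16 of the 24 bits survive the port. */
static int32_t vm_to_kalimba(uint16_t vm_word)
{
    return (int32_t)vm_word;              /* 0xXXYY -> 0x00XXYY */
}

static uint16_t kalimba_to_vm(int32_t sample24)
{
    return (uint16_t)(sample24 & 0xFFFF); /* the extra 8 bits are lost */
}
```
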

    So what does it mean when it says 8675 supports 24 bit audio? Is it only the digital inputs like SPDIF or I2S that support 24-bit audio? I have no idea about SPDIF, I2S, digital microphone or other audio ports, but I have worked with the ADC and DAC audio ports, and I cannot think of any way how it can allow me to read in 24 bits of audio data into the VM.

    What does it mean when the documents say that the 8675 supports 24-bit audio, and is there a sample program available for this?

  15. Hello
    This question is about the CSRA68100 series of chips

    I only have experience working with 8670/8675 so please bear in mind that my understanding of CSR chips is limited to those two
    Looking at some documents about the CSRA68100 chip on the CSR support website, it seems this chip is similar to the 8675 chip, in that it has some MCU (some type of VM, possibly based on XAP like the 8675, perhaps?) and a Kalimba DSP
    However, it apparently uses ADK 6.0 (which, when searching for it on Google, doesn’t return any results) and has a dual-core Kalimba processor
    In fact, the more I look at the features of the CSRA68100 with what limited documentation I have available to me, it seems its features are similar to the features Qualcomm has released for the QCC5100 series of chips

    Do you have any information regarding what the relationship between the 8675 and the CSRA68100 chip is? What I mean is, we know that 8675 is an upgraded version of 8670. Everything that works in 8670 works in 8675, with 8675 having a few more features available. Can a comparison be made between 8675 and the CSRA68100? Are the scope of applications for the two chips entirely different? Looking at the documents, both seem to be applicable for making bluetooth speaker and headsets. Then looking at the more features available for the CSRA68100, wouldn’t this chip be a better choice?

    However, a google search shows more results for 8675 than CSRA68100. Is this simply because 68100 is relatively newer? For making headsets, what is the trade off in using CSRA68100 over the 8675? Does the 68100 have any relationship with the QCC5100 series? I’m sorry for the long post but I hope you can shed some light onto this. Thank you.

    1. I would say only Qualcomm can answer this question fully. From looking at the available information about both the CSRA68100 and the QCC5100, it looks like they are pretty much equivalent. Both offer substantially improved DSP capabilities and ADC/DAC performance (at least on paper). The shift to ADK 6 is a complete replacement of the older ADK, and both of the newer chips use it. Hazarding a guess, these two chips are actually the same one, with two different names – one in the original CSR nomenclature (the CSRA68100) and the other in the new Qualcomm nomenclature (the QCC5100).

      With regard to the difference between the CSR8670/75 and the new CSRA68100 and QCC5100, it’s important to note that the CSR8670/75 are extremely old designs – they originate in the early 2000’s, making them almost 20 years old. Although CSR (and later Qualcomm) have been diligent in providing FW support for newer Bluetooth protocols (and indeed the CSR8670/75 supports Bluetooth 5 with the latest ADK), the age of the design shows when looking at Bluetooth performance, peripheral support and real-world ADC/DAC performance. The new chips seem to address all of these limitations – the MCU and DSP are significantly faster, the FW architecture is greatly revised to allow greater control to the custom user code, and the ADC/DAC have been revised, supporting greater bit depth and sampling rates (and hopefully also sounding good in the real world).

      Although existing designs using the CSR8670/75 can probably continue using it (these chips are responsible for a large percentage of Bluetooth headphones manufactured today), I would imagine that new designs should use the new chips. The decision between the CSRA68100 and the QCC5100 can probably be resolved by contacting your Qualcomm supplier to get concrete information and datasheets for the two chips, as well as comparing prices between the two.

    2. Were you able to determine whether the 32-bit MCU in the CSRA68100 chip is of type XAP?
      If it is not, then what type is it?

      Thank you.

      1. I’m not sure what you’re asking exactly, but if you mean if the CSRA6810x FW is in XAP format, then yes. However, it is built using the new ADK 6 and the MDE, not the old ADK 4.

        1. I think he is asking whether the MCU is a XAP processor or not. As far as I know, in the CSR8675 the MCU is a 16-bit XAP architecture (probably XAP 4) and the DSP is a 24-bit Kalimba architecture. I’m not sure about the CSRA68000, but in the QCC5100 series they seem to have done away with the XAP architecture and use Kalimba’s architecture for both the MCU and the DSP (I’m only guessing this from reading documents).

          1. I see, that makes sense. I honestly don’t know, though. I would guess that it can be gleaned from the internal working of the MCU compiler, based on the assembly reference.

  16. Hello
    My question is about “Faults” in the CSR8675.
    I am currently getting this error when I download the my_first_dsp_app sample program to my CSR module (custom made, but based on the sample schematics from the CSR datasheet):

    “11:56:43.539 Fault 0x004d: ‘no_reference_clock’ : : No system reference clock was detected”

    I only know this because it opens up a “Fault” tab in the xIDE when I download the program and run it while the xIDE is running in debug mode.
    Otherwise the program seems to be running fine.

    What is this fault, and why is it happening? This board was running fine yesterday and, as far as I know, I haven’t changed anything on it.
    This “Fault” message shows up in my custom VM-Kalimba project as well as in the supplied my_first_dsp_app sample project.

    I would also like to ask what these “Fault” messages mean in general.
    I have seen fault messages before, mainly one caused by the USB-SPI connector not being detected correctly by xIDE, and other faults I have seen have usually caused the CSR program to stop running.
    I am assuming that if I just ignore this fault message, it might cause some unexpected problem down the line.

    What are these fault messages and how can I track down their cause? Can they be caused by problems on the CSR module hardware itself (e.g., bad circuit design), or are they software related (errors in the VM/Kalimba program, bad PS Keys)?
    In this particular case I have reset the PS Keys to their default values and the error is still happening, even in the my_first_dsp_app sample program.

    1. I’ve never seen that particular fault, but here’s what I can suggest when the CSR starts to act squirrelly:
      Try erasing the entire chip, not just the PS region.
      Try unplugging the USB to SPI converter, as well as power cycling the board and restarting the computer. The driver for the USB to SPI sometimes gets stuck in a problem state, one that even disconnecting the USB to SPI converter doesn’t resolve.
      Try with a different board (which will rule out HW issues with your specific PCB), and a quick test with the evaluation board (which will rule out any FW or environment issues).

      You’ve already performed the basics (testing with a simple FW and erasing the PS configuration), so try the above steps.

      In general, the CSR chips have a tendency to act out, especially with custom HW, but also with the evaluation boards. The above steps typically help, but I’ve had my share of HW on which the CSR simply stopped working correctly.

  17. How do I configure the LED color in ADK 4.2 via the XML table?

    State | On Time | Off Time | Repeat Delay | LED A Map | LED B Map

    1. I’ve only used the LEDs manually – I found the LED generation to be too limiting (especially around fades).
      Still, LED color is controlled by selecting the logical LED you want to use – the CSR chip is intended to be connected to an RGB LED, and the specific HW connection to the 3 LED lines of the CSR chip determines the color of each logical LED. This also allows for color combinations (although in my experience, without a good light pipe color combinations don’t look too good).
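To illustrate the logical-LED idea (a plain C sketch – these names are hypothetical and are not the CSR LED API): each of the chip’s 3 physical LED lines drives one element of an RGB LED, and a color is simply the set of lines driven together.

```c
#include <stdint.h>

/* Hypothetical illustration of the logical-LED idea -- NOT the CSR API.
 * Each of the chip's 3 LED lines drives one element of an RGB LED;
 * a "logical LED" (color) is just the set of lines driven together. */
#define LED_LINE_RED    (1u << 0)
#define LED_LINE_GREEN  (1u << 1)
#define LED_LINE_BLUE   (1u << 2)

/* Example colors as combinations of the three lines. */
#define LED_COLOR_RED    (LED_LINE_RED)
#define LED_COLOR_AMBER  (LED_LINE_RED | LED_LINE_GREEN)
#define LED_COLOR_WHITE  (LED_LINE_RED | LED_LINE_GREEN | LED_LINE_BLUE)

/* Returns nonzero if the given physical line must be driven for this color. */
static int led_line_active(uint8_t color_mask, uint8_t line)
{
    return (color_mask & line) != 0;
}
```

Driving red and green together yields amber, for example – which is exactly why a poor light pipe makes combined colors look muddy: the two elements are physically side by side.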

  18. Hi,
    I am looking into setting up a demo project using the CSR8675 with digital mics (first two, then three) and outputting the data on an I2S port. I have the clock running for the mics but I have not been able to route the signal to the I2S port. I have tried to search the docs but haven’t found that much about digital mics.

    1. Is the input automatically filtered when selecting digital mics as input?
    2. How do I output the mic data on the I2S port (I have configured the source and sink and called StreamConnect)?

    I was wondering whether there is an example somewhere, but I have only seen some with analog mics.

    Any help will be appreciated,

    1. I honestly have no experience with digital microphones with the CSR chip. I can offer some leads, though:
      If you route the microphone source into Kalimba, and the Kalimba into the I2S sink, you will be able to isolate the issue – if Kalimba shows no signal in the audio_in_left/right buffer, you know that the mics are misconfigured somehow. If the input data looks correct, the problem is with the I2S configuration.
      I can say that I2S is pretty finicky to configure. Are you using the CSR in Master mode? If so, make sure that all the configuration is compatible with the I2S slave (L/R justification, sampling rate, bit width, MSB delays, etc.). Slave configuration is much more forgiving, but still has a few gotchas.
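As a sketch of that compatibility check (plain C – the field names are illustrative, not CSR PS Keys or API structures), the master and slave must agree on every one of these parameters before any intelligible audio comes out:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative I2S configuration record -- the field names are
 * hypothetical, not CSR configuration keys. */
typedef struct {
    uint32_t sample_rate_hz;    /* e.g. 44100 or 48000 */
    uint8_t  bits_per_channel;  /* e.g. 16, 24, 32 */
    bool     left_justified;    /* left-justified vs. standard I2S */
    uint8_t  msb_delay_bclks;   /* 0 for LJ, typically 1 for standard I2S */
} i2s_config;

/* A master/slave pair will only produce intelligible audio when
 * every one of these parameters matches on both sides. */
static bool i2s_configs_compatible(const i2s_config *master,
                                   const i2s_config *slave)
{
    return master->sample_rate_hz   == slave->sample_rate_hz &&
           master->bits_per_channel == slave->bits_per_channel &&
           master->left_justified   == slave->left_justified &&
           master->msb_delay_bclks  == slave->msb_delay_bclks;
}
```

A single mismatched field (say, 16 vs. 24 bits per channel) is enough to produce silence or noise, which is what makes I2S bring-up so finicky.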

      1. I have tried to define the source and sink and connect the streams as shown in the examples. I did manage to find some documentation showing that the digital mic signal is first converted to a 3-bit signal, then sent through an IIR filter and a gain step before being sent into the Kalimba. Also, I did notice that there was a configuration error for the digital mics, but correcting this did not result in anything on the I2S port (still silent). I have set the I2S to master mode, as this is my output port and the clock is to be derived from it. The I2S setup has been taken from the ‘my_first_24bit_dsp_app’ example, which I would assume is correct (?).
        I have taken a look at the input buffer in the Kalimba and it does seem to receive data (and this data is copied to the output buffer). I will try to dig into the I2S to see if I can find anything wrong. But being new to Kalimba, it is not so straightforward…

        1. If the DSP seems to be running and copying data correctly, I would test the output some more.
          You can try using the DAC output (perhaps using the evaluation board) to verify 100% that all the input and processing pipelines are working correctly. This would also allow you to listen to the output – perhaps the entire flow is working well, but the volume level is too low to be heard using the I2S HW.
          To further diagnose the I2S, testing the HW lines with an oscilloscope is probably your best bet. Verify that the BCLK and LRCLK look good and are in the right frequencies, and that you have bits on the SDATA. Typically I2S (being a digital interface) is quite boolean – it either doesn’t work at all or it works correctly.
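When checking those clocks on the scope, the expected frequencies follow directly from the stream parameters (a plain C sketch, not CSR code): LRCLK toggles once per sample frame, so it runs at the sampling rate, while BCLK must carry every bit of every channel.

```c
#include <stdint.h>

/* Expected I2S clock rates for a given stream configuration.
 * LRCLK (word select) runs at the sampling rate; BCLK must carry
 * bits_per_channel bits for each channel in every frame. */
static uint32_t i2s_lrclk_hz(uint32_t sample_rate_hz)
{
    return sample_rate_hz;
}

static uint32_t i2s_bclk_hz(uint32_t sample_rate_hz,
                            uint32_t bits_per_channel,
                            uint32_t num_channels)
{
    return sample_rate_hz * bits_per_channel * num_channels;
}
```

For a 44.1 kHz, 16-bit stereo stream this gives a 44.1 kHz LRCLK and a 1.4112 MHz BCLK; if the scope shows anything else, the configuration on one side is off.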

          1. The main problem is that the I2S is dead silent. No clock or anything else present. You are right, maybe it would be a good idea to try the DAC to see if that works.

  19. Hi,
    Got it to work now using the CSR8675 eval board. Previously I was using our own eval board, which apparently has a layout flaw, and this was the reason the I2S was completely dead.
    Thank you for your assistance.

    1. Glad to have helped. I2S is very finicky, and in one case we had to go through 4 revisions of the HW before getting it right.

  20. Hello I would like to ask about the AAC decoder add-on in ADK 4.1 for the CSR8675

    The add-on installs CSR’s AAC decoder Kalimba library as well as its source code, but there is no sample code explaining how to start using them.
    I am guessing this is supposed to replace the default SBC decoder that gets invoked when streaming music from a smartphone to the CSR module through A2DP?

    What I would really like to do is use the CSR AAC decoder code as my basis, and use it to decode AAC files which I will send to Kalimba’s memory through other means (for example, hard-coding AAC frame data in Kalimba memory). For this I will need to understand how to supply the AAC frame data (a sequence of bytes) to the AAC decoder library’s source code, and possibly modify the decoder’s source code itself so that it takes input from my designated memory area instead of from the Bluetooth input port. I have the following questions:

    1. In your opinion, how feasible do you think it would be to do this, given that they have supplied the full kalimba source code for the decoder but no other information on how to start using it?

    2. This question is more about Bluetooth in general. I would like to make a sink application including the AAC decoder library add-on (I will include the full source code for the AAC decoder instead of including its library file). I want to transfer 1 or 2 frames of an AAC file I made, from a smartphone to the CSR module running this sink application, then debug and single-step through the AAC decoder’s source code to try to understand how it processes the AAC frame data. However, I realized that I have no idea how to make a smartphone start sending audio using the AAC codec (normally while streaming through A2DP, the smartphone and CSR module automatically choose the SBC codec). What do I need to do in order to make a smartphone send audio data through A2DP as AAC frames instead of SBC frames to the CSR module? I have only worked with Android smartphones and CSR applications based on the default sink project, and I have only seen A2DP happen through SBC. Would streaming a .aac file from an iPhone make the streaming happen through AAC?

    1. It is fairly straightforward to use the AAC decoder to play non-Bluetooth sources. For example, the MP3 decoder can be used to play MP3 voice prompt files directly, unrelated to Bluetooth.
      Basically you just load the AAC decoder DSP, and hook it up with a file source and DAC sink.
      With regard to playing AAC through Bluetooth, I agree that most phones default to SBC, and honestly I never tried using something other than SBC (with the exception of aptX-HD, but that’s quite a different story, both on the headphones and on the phone). I would say that playing an AAC file would be easier for debugging than setting up an AAC Bluetooth stream.

      1. Could you give me some pointers on how to go about starting to play AAC from a file (instead of from Bluetooth), please? I looked inside the AAC decoder source code folder and it looks like the function in frame_decode.asm is what I need, but it requires the $codec.DECODER_STRUC structure as an input and I am not sure how to set that up.

        I searched some more and it looks like the codec_decoder.asm used in the sbc_decoder Kalimba project, part of the default sink project, is what sets up the codec for music streaming in A2DP. But using the sbc_decoder Kalimba project will probably involve the rest of the VM C files as well, which I suppose have been set up assuming audio input will come from Bluetooth, while I just want to decode AAC frames stored somewhere in Kalimba’s memory, completely ignoring the Bluetooth part.

        What can I use as a starting point for reading AAC frames as a file in Kalimba’s memory? I have an old CSR document from 2008 that talks about a “test_aacdecoder” sample project in BlueLab 4.1, and I have a copy of BlueLab 4.1.2 as well, but that sample application is not there, nor are any files related to AAC.

        1. It’s pretty involved to explain in a comment. You can leave a comment with your contact details and I’ll get back to you with more information.
