Mixing audio with .Net and C#

Writing code that handles audio is a bit more complicated than it looks. Besides the application logic, you need to know something about digital audio and audio devices. You can either program against the Windows Core Audio APIs directly, or use an audio library like NAudio, WaveLib or the BASS audio library.

It’s all about that BASS

The BASS audio library claims to be the solution for everything that must make some noise. The library itself can be found at http://www.un4seen.com/, with a well-documented API and a forum. For the .NET Framework there's the Bass.Net API at http://www.bass.radio42.com/, which provides some add-ons.

But first, why? My radio station broadcasts commercial blocks. We don't have a central studio; every DJ has their own setup at home, with different types of radio-automation software, plus a server where the programme elements, like the commercials, are provided. But everybody has to add them manually to their playlist, or create a mixdown with Adobe Audition, like I do. Let's see if I can automate that.

The base of this solution is the mixdown, and BASS should be able to do that. I was hoping this wheel had already been invented and shared on the world wide web. It probably has, but I assume it's hidden somewhere on page 18 of the Google search results.
So let's start. It took some time to struggle through examples for MP3, mixdown to the audio device, bitrates and BASSFlags. I know it will all seem obvious once it works, but for me it was a trip down the rabbit hole.
The first step was to play just one track. The first issue: bass.dll and bassmix.dll have to be in the bin folder, but cannot be referenced in Visual Studio; only Bass.Net can be added as a reference. An example for playing just one track was not that difficult to find.
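One way to get the native DLLs into the bin folder automatically is to add them to the project and have the build copy them to the output folder. A sketch of the relevant .csproj fragment, assuming bass.dll and bassmix.dll sit in the project root:

```xml
<!-- Copy the native BASS DLLs to the output (bin) folder on every build. -->
<ItemGroup>
  <None Include="bass.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
  <None Include="bassmix.dll">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```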

if (Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero))
{
  int stream = Bass.BASS_StreamCreateFile("PasensiDemo.mp3", 0, 0, BASSFlag.BASS_DEFAULT);
  if (stream != 0)
    Bass.BASS_ChannelPlay(stream, false);
}

Next I tried the mixdown, and there I struggled: the examples only mix down to the output device, or use LAME to write an MP3 file.
To mix audio you load the first audio file into a BASS stream and add it as a channel to a new 'mixdown' stream. Then you create another BASS stream for the second audio file and add it to the mixdown stream as well.

  • But how do you set the position where the second audio file must start?
    • The examples use BASS_Mixer_StreamAddChannel; to set a next-cue, use BASS_Mixer_StreamAddChannelEx instead.
  • The next-cue must be given in bytes? But I use seconds…
    • BASS can convert the time to bytes and back:
      var bytes = Bass.BASS_ChannelSeconds2Bytes(streamA, seconds);
      var seconds = Bass.BASS_ChannelBytes2Seconds(streamA, bytes);
  • Why do I need the stream (file) to convert bytes to seconds and back?
    • The number of bytes in one second depends on the audio format: a low sample rate (8 kHz) at 8 bits with one channel (mono) will use far fewer bytes per second than 44.1 kHz, 16-bit stereo.
  • Some BASSFlags turned out to be important:
    • BASS_MIXER_END: end the mixer stream when its sources end. When I first managed to write to file, the output was 20 GB, because BASS continued writing (nothing) to the file while playing the song I tried to save.
    • BASS_STREAM_DECODE: the stream is not set up to play on a device, but to be decoded, so it can be saved to a file.
    • BASS_SAMPLE_FLOAT: use 32-bit floating-point samples. It played well on the speakers, but when writing to WAV it created noise.
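The bytes-per-second arithmetic behind that conversion is easy to check by hand: for uncompressed PCM it is simply sample rate × channels × bytes per sample. A plain C# sketch, no BASS involved (BytesPerSecond is just a helper for illustration):

```csharp
// Bytes per second of uncompressed PCM = sample rate * channels * (bits per sample / 8).
int BytesPerSecond(int sampleRate, int channels, int bitsPerSample)
    => sampleRate * channels * (bitsPerSample / 8);

Console.WriteLine(BytesPerSecond(44100, 2, 16)); // 176400 bytes/s: 44.1 kHz, 16-bit stereo
Console.WriteLine(BytesPerSecond(8000, 1, 8));   //   8000 bytes/s: 8 kHz, 8-bit mono
```

This is why BASS_ChannelSeconds2Bytes needs a stream handle: the stream carries the format, and with it the bytes-per-second factor.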

After creating the mixdown I have a stream that I wanted to save as a WAV file. I found some examples, but I didn't want to write MP3, and I hadn't realized yet that I needed the BASS_STREAM_DECODE flag when creating the streams. It took some time, but in the end it was not that difficult: create a WaveWriter and, while 'playing', chop the data into arrays of bytes and let the WaveWriter write them to the file.

My Proof-Of-Concept.

One extra step I took was to bundle some data in a model for my audio sources:

public class AudioItem
{
  public string FileName { get; set; }
  public double Offset { get; set; }
  public double Length { get; set; }
  public double NextCue { get; set; }
}

I took a jingle as the first audio source and a song to mix in as the second:

var audioItem1 = new AudioItem
{
  Length = 13,
  NextCue = 11.2,
  Offset = 0
};

To use BASS you must initialize it first:

if (Bass.BASS_Init(-1, 44100, BASSInit.BASS_DEVICE_DEFAULT, IntPtr.Zero))

Next, create the streams for the mixdown and for the two source files, all with the BASS_STREAM_DECODE flag.

stream = BassMix.BASS_Mixer_StreamCreate(44100, 2,
  BASSFlag.BASS_STREAM_DECODE | BASSFlag.BASS_MIXER_END);
int streamA = Bass.BASS_StreamCreateFile(audioItem1.FileName, 0, 0,
  BASSFlag.BASS_STREAM_DECODE);
int streamB = Bass.BASS_StreamCreateFile(audioItem2.FileName, 0, 0,
  BASSFlag.BASS_STREAM_DECODE);

Now I can add the first stream to the mixdown stream.

bool okA = BassMix.BASS_Mixer_StreamAddChannel(stream, streamA, BASSFlag.BASS_DEFAULT);

I know the next-cue of the first file in seconds, but I need it as a number of bytes.

var startNext = Bass.BASS_ChannelSeconds2Bytes(streamA, audioItem1.NextCue);

With the start position in bytes I can add the second stream to the mixdown.

bool okB = BassMix.BASS_Mixer_StreamAddChannelEx(stream, streamB,
  BASSFlag.BASS_DEFAULT, startNext, 0);

Finally I can use the WaveWriter to store the result.

var newFilename = DateTime.Now.ToString("yyyy MM dd - HH.mm.ss")
                                                    + "_Test.wav";
short[] data = new short[32768];
using (var WW = new WaveWriter(newFilename, stream, true))
{
  while (Bass.BASS_ChannelIsActive(stream) == BASSActive.BASS_ACTIVE_PLAYING)
  {
    int length = Bass.BASS_ChannelGetData(stream, data, 32768);
    if (length > 0)
      WW.Write(data, length);
  }
}
