SoundFloored: Open Source Soundboard Pedal (Part 2 – Software Implementation)

This is the second post in a series about designing and creating SoundFloored, an open source soundboard pedal! Check out the other posts in the series:

So now that I’ve prepared what I can, it’s time for the step that can often derail a whole project: implementation.

For context, the “Plan” section of this post was written before I started writing any code, and the “Implementation” section was written shortly after I’d put the first version together.


Plan

I know, I know, I’ve already done the planning in the first part: “when are you actually going to start writing code?”

Well, except for tiny projects (usually single-file, one-off scripts), I like to spend some time considering how the different parts will interact. The reasoning is that as soon as you start working with multiple modules and classes, you really have to consider how the parts will hang together: which part should “own” which pieces of the logic, what the different interfaces should be and how the code should be laid out.

So what are the parts that I’m going to need? SoundFloored is going to be a Raspberry Pi connected to physical buttons that play audio clips, change banks etc. In the simplest implementation, I could just write all of the logic in one file that sets up the Raspberry Pi (configuring which buttons are connected to which GPIO pins), maintains the state of which bank I have selected and executes the calls to PyGame to actually make the audio play. In a (very) simple diagram, this might look as follows.

A diagram on a whiteboard. There is a green box with the following written inside it: "Raspberry Pi Config" (in blue), "SoundFloored Logic" (in green) and "PyGame" (in red).

Functional? Probably, but what happens when I want to run it not from a Raspberry Pi, but from my computer keyboard? This isn’t just a theoretical need either; developing against a Raspberry Pi involves a few more moving parts than running the code on my local machine using the tooling that I’m used to (such as Visual Studio Code), so it would be great to write the core logic on my desktop using my keyboard to control it. How would a second interface work with the existing design? Let me add it to the diagram.

A diagram on a whiteboard. There are two green boxes stacked vertically. The first has the following written inside it: "Raspberry Pi Config" (in blue), "SoundFloored Logic" (in green) and "PyGame" (in red). The second is identical except for the first line, which reads "Keyboard Config" instead.

Here I’ve got a very tight coupling to PyGame; if I wanted to change what actually plays the audio in the background (either because PyGame isn’t practical or I later find something with better features) I’d have to go through and modify every call and potentially change logic completely if I relied upon how PyGame is designed. This design also duplicates a lot of the SoundFloored specific logic across multiple interfaces such as reading settings, loading audio clips and maintaining bank state. This problem will only become more evident as I consider more interfaces.

So clearly this isn’t the best way to go. What I’m looking for is better encapsulation and greater decoupling: encapsulation will help keep the logic separate and mean that the internal implementation of each part can change as long as the interfaces remain consistent, while decoupling will mean that switching out sections of the program won’t be nearly as impactful since they won’t be so tightly linked. So how could this look in practice?

A diagram of the SoundFloored Architecture. There is one long red box at the bottom with the text "PyGame" inside. Above that is a single green box that says "SoundFloored Logic", with three separate side-by-side blue boxes above it that say "Raspberry Pi", "Gui" and "Keyboard" respectively.

In this design I’ve separated out the different sections, with each section only being able to communicate with those directly above or below it. The specific input interfaces are now distinct and only in charge of setting up and configuring what is relevant to that implementation, such as registering the inputs (physical buttons on a Raspberry Pi or the individual keys on a keyboard for example) and configuring what logic needs to be executed on the SoundFloored layer when those inputs fire. This logic is best kept with the interfaces since it allows us to take into account quirks or other specificities of the particular interface. Is a button only considered pressed when held down for a few seconds, or does it need to be pressed twice? What about handling switch debouncing in code? None of these should affect how the SoundFloored logic layer is written, so we can keep all of that where it’s most relevant.
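As an example of the kind of interface-level quirk handling mentioned above, here is a sketch of software debouncing that could live in a Raspberry Pi interface. The `DebouncedButton` class and all of its names are hypothetical, not from the SoundFloored codebase; the point is just that none of this timing logic needs to leak into the logic layer below:

```python
import time

class DebouncedButton:
    """Hypothetical interface-layer debounce: ignore presses that arrive
    within `window_s` seconds of the last accepted press, so contact
    bounce doesn't fire the callback several times per physical press."""

    def __init__(self, callback, window_s=0.05, clock=time.monotonic):
        self._callback = callback
        self._window_s = window_s
        self._clock = clock  # injectable so the behaviour can be tested
        self._last_accepted = float("-inf")

    def on_raw_press(self):
        now = self._clock()
        if now - self._last_accepted >= self._window_s:
            self._last_accepted = now
            self._callback()

# Simulate a bouncy switch: three raw edges in 20ms, then a real second press
presses = []
fake_time = [0.0]
button = DebouncedButton(lambda: presses.append("press"),
                         clock=lambda: fake_time[0])

for t in (0.0, 0.01, 0.02, 0.2):
    fake_time[0] = t
    button.on_raw_press()

print(len(presses))  # 2: the bounces at 0.01s and 0.02s are dropped
```

The logic layer only ever sees the debounced callback firing, which is exactly the separation the architecture above is aiming for.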

The SoundFloored logic layer in the middle will be beneficial for a few reasons; first of all, it acts as an excellent place to keep SoundFloored specific logic (such as rotating through banks, loading songs etc.) as well as acting as an interface with PyGame. Instead of having the Raspberry Pi interface call PyGame directly to start playing a clip for example, the interface can request for the SoundFloored logic layer to play a clip. This might seem like a redundant step, but it allows the call to be translated into something more useful. Instead of the interface having to request information on the current bank and then using that to request PyGame to play an audio clip, SoundFloored can maintain the bank state and when asked to play a clip in a given position, load the current bank and then send the specific clip to PyGame. It also helps with decoupling as specified earlier; if I wanted to swap out PyGame I would only need to modify the internal implementation of SoundFloored logic (as long as I left the interfaces in place).
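To make the layering concrete, here is a heavily simplified sketch. Only `MusicLogic` and `play_clip` are names from the project; everything else (`FakeAudioBackend`, `increment_bank`, the constructor shapes) is illustrative, and the real implementation wraps PyGame rather than a fake backend:

```python
class FakeAudioBackend:
    """Stand-in for PyGame in this sketch; the real logic layer
    would call into pygame's mixer here."""
    def __init__(self):
        self.played = []

    def play(self, clip):
        self.played.append(clip)
        print(f"playing {clip}")

class MusicLogic:
    """Middle layer: owns the bank state and translates 'play position N'
    into a concrete clip for the audio backend."""
    def __init__(self, banks, audio_backend):
        self._banks = banks          # each bank is a list of clips
        self._audio = audio_backend
        self._current_bank = 0

    def increment_bank(self):
        self._current_bank = (self._current_bank + 1) % len(self._banks)

    def play_clip(self, position):
        clip = self._banks[self._current_bank][position]
        self._audio.play(clip)

class KeyboardInterface:
    """Input layer: only knows how to map a key to a MusicLogic call."""
    def __init__(self, music_logic):
        self._logic = music_logic

    def on_key(self, key):
        self._logic.play_clip(int(key) - 1)  # key "1" -> position 0

backend = FakeAudioBackend()
logic = MusicLogic([["kick.wav", "snare.wav"]], backend)
KeyboardInterface(logic).on_key("2")  # plays "snare.wav"
```

The interface never sees banks or the audio backend, so swapping PyGame out (or adding a Raspberry Pi interface alongside the keyboard one) only touches one layer.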


Implementation

The source for SoundFloored is hosted on GitHub and uses the MIT License, so feel free to read it, use it, change it, whatever!

This blog post won’t go into too much detail on the exact code that I wrote since you can check out the source if you want to see specifics. Instead, I’ll discuss the general code structure, execution flow, extra features I added and issues I had with implementing parts of it.

Code Structure

I tried to keep the project layout relatively simple while also keeping it logically separated. As such, the general structure is:

  • soundfloored folder
    • music_logic module
    • interfaces folder
      • gui_interface module
      • keyboard_interface module
      • rpi_interface module

This has worked well so far and means that I only need to import the modules that are relevant. I could have broken the code out into many further modules, such as extracting Settings, pulling out the various enums and more, but the likelihood that anyone would find modules that granular helpful is pretty low for this project, and it would hurt ease of use. If the codebase expands further in the future that may change, but for now simplicity is key.

Execution Flow

The general structure of the application is as follows:

  1. The application’s entry point is executed
  2. Settings are loaded from settings.ini and used to populate a Settings object
  3. A MusicLogicSettings object is created using some of the values from Settings*
  4. A MusicLogic instance is created using MusicLogicSettings. In the initialisation, audio clips are loaded and stored as PyGame Sound objects inside Bank objects (stored in a list called banks on the new instance of MusicLogic)
  5. The chosen interface is selected based on the corresponding setting in Settings and loaded using a dictionary that maps the potential setting strings to the interface implementations**
  6. The MusicLogic instance is passed to the constructor of the specified interface
  7. The interface’s start method is called, which tells the interface to start listening for inputs and sending them to MusicLogic as required

* This is done to decouple reading the settings.ini file from the chunks of data needed by specific parts of the application. If I passed Settings straight through to MusicLogic, it could easily turn into a situation where a large collection of settings is being passed around but never used. Keeping the objects distinct also makes it far easier to split settings across multiple files in a logical fashion.
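As a sketch of that split (the field names here are my own guesses, not taken from the repo), the mapping function is the only place that knows about both shapes:

```python
from dataclasses import dataclass

@dataclass
class Settings:
    """Everything read from settings.ini (illustrative fields)."""
    interface: str
    root_audio_directory: str
    repeat_style: str

@dataclass
class MusicLogicSettings:
    """Only the subset that MusicLogic actually needs."""
    root_audio_directory: str
    repeat_style: str

def to_music_logic_settings(settings):
    # MusicLogic never sees 'interface', so it can't quietly
    # grow a dependency on it
    return MusicLogicSettings(settings.root_audio_directory,
                              settings.repeat_style)

settings = Settings(interface="rpi",
                    root_audio_directory="clips/",
                    repeat_style="stop")
mls = to_music_logic_settings(settings)
print(mls)
```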

** I really like this code snippet; it turns a chain of if/elif/else statements into an implementation that only requires an additional enum value and a new dictionary entry for each new interface:

class Interfaces(Enum):
    KEYBOARD = 0
    GUI = 1
    RPI = 2

interface_dict = {
    Interfaces.KEYBOARD: KeyboardInterface,
    Interfaces.GUI: GuiInterface,
    Interfaces.RPI: RpiInterface
}

try:
    interface_enum_instance = Interfaces[settings.interface.upper()]
    interface_class = interface_dict[interface_enum_instance]
    logging.debug(f"Creating instance of {interface_class.__name__}")
    interface = interface_class(music_logic)
except KeyError:
    logging.error(f"Could not load interface {settings.interface}")

Extra Features

Originally I was only planning on writing the bare minimum for this stage, but I found myself waiting on components that I needed for phase two and really enjoying writing the code for SoundFloored. As such, I sort of ran away with it and ended up adding extra features as I thought of them. Examples of the new features that I wasn’t originally planning on are:

Repeat Styles

Repeat styles select the behaviour when the same button is pressed multiple times. At time of writing there are two: STOP cancels the current clip on that channel and RESTART moves playback on that channel back to the beginning of the clip. I added this largely because the default behaviour when I first started putting the project together was what is now RESTART, which got annoying very quickly when I wanted to test out changes or fixes. STOP is now the default since it’s more likely to be what I want when using SoundFloored, but I’ve added a setting that controls which one is selected on startup, along with methods to change the repeat style during execution.
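A sketch of the idea: the RepeatStyle names and the startup setting are from the post, while `on_repeat_press` and the string values are purely illustrative. The startup style can use the same string-to-enum trick as the interface selection:

```python
from enum import Enum

class RepeatStyle(Enum):
    STOP = 0
    RESTART = 1

def on_repeat_press(repeat_style, channel_busy):
    """What a second press of the same button does under each style (sketch)."""
    if repeat_style is RepeatStyle.STOP:
        # Cancel the clip if it's playing, otherwise start it
        return "stop" if channel_busy else "play"
    # RESTART always moves playback back to the beginning
    return "restart"

# Resolve the startup style from a (hypothetical) setting string
startup_style = RepeatStyle["stop".upper()]
print(on_repeat_press(startup_style, channel_busy=True))  # stop
```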

Automatic Clip Repeat

Automatic clip repeat allows audio to continue repeating if the input is still firing when the clip finishes. This is actually a feature that is controlled by the interfaces themselves (since some inputs might not have the concept of being held or constantly firing), but to support it I only needed to add the is_distinct_trigger parameter to the interface of play_clip and change how I was calling it: generally, play_clip is called with is_distinct_trigger set to True when an input first fires, and then with it set to False for as long as the input continues to fire.
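A sketch of that calling convention from an interface's point of view (`MusicLogicSketch` and `simulate_held_button` are illustrative stand-ins, not project code):

```python
class MusicLogicSketch:
    """Illustrative stand-in for MusicLogic that records play_clip calls."""
    def __init__(self):
        self.calls = []

    def play_clip(self, position, is_distinct_trigger):
        self.calls.append((position, is_distinct_trigger))

def simulate_held_button(logic, position, polls_while_held):
    # The first firing of an input is a distinct trigger...
    logic.play_clip(position, is_distinct_trigger=True)
    # ...then while the input keeps firing (e.g. the button is held down),
    # the interface keeps calling with False so the logic layer can decide
    # whether the clip has finished and should start again
    for _ in range(polls_while_held):
        logic.play_clip(position, is_distinct_trigger=False)

logic = MusicLogicSketch()
simulate_held_button(logic, position=0, polls_while_held=3)
print(logic.calls)  # one (0, True) followed by three (0, False)
```

An input with no concept of being held (say, a GUI button click) would simply only ever send distinct triggers.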

Implementation Difficulties

For the most part, implementation wasn’t too painful; PyGame did most of the heavy lifting (especially when it came to playing multiple clips at once) but there were a few parts that weren’t quite as easy.

First of all, trying to get a Tkinter interface working, using first the grid layout and then pack, was quite painful; it’s been a long time since I’ve had to work in the wonderful world of graphical interfaces without the helping hand of HTML/CSS or a visual designer. Figuring out how to get all of the buttons placed on the screen in the way that I wanted and calling the correct logic was super not fun and has probably produced the only code of this project that I’m really not happy with (pre-emptive apologies to anyone with even a modicum of Tkinter experience if you look at that interface; it’s not pretty).
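For anyone who hasn’t used Tkinter’s geometry managers, this minimal grid-layout sketch (entirely hypothetical, unrelated to the actual GuiInterface code) shows the sort of row/column bookkeeping involved:

```python
import tkinter as tk

def build_button_grid(root, labels, on_press, columns=4):
    """Create one Button per clip label and grid() them row by row.
    grid() needs explicit row/column maths; pack() only stacks widgets
    in one direction, which is part of why multi-row layouts get fiddly."""
    for index, label in enumerate(labels):
        button = tk.Button(root, text=label,
                           # default arg pins the current index, otherwise
                           # every lambda would see the final loop value
                           command=lambda i=index: on_press(i))
        button.grid(row=index // columns, column=index % columns)

# Usage (opens a window, so not run here):
# root = tk.Tk()
# build_button_grid(root, [f"Clip {n}" for n in range(1, 9)], on_press=print)
# root.mainloop()
```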

Secondly, one of the biggest frustrations was adding the automatic clip repeat feature (described in the “Extra Features” section) whilst still correctly handling repeat styles. Since play_clip could now be called not just to play a clip but to stop it, restart it or do nothing (depending on a number of factors), the logic began to get a bit messy. The current implementation of play_clip certainly isn’t super easy to follow, but I opted to make it as readable as possible while avoiding too many duplicate calls.

if repeat_style == RepeatStyle.STOP:
    if is_distinct_trigger:
        if is_busy_channel:
            self._stop_channel(position)
        else:
            self._play_channel(position, clip)
    else:
        if not is_busy_channel:
            # Drop any requests that come in after manually stopping a channel
            # until it has been manually started again (to prevent a channel from
            # starting a split second after being stopped from the same button press)
            if position not in self._manually_stopped_channels:
                self._play_channel(position, clip)
elif repeat_style == RepeatStyle.RESTART:
    if is_distinct_trigger:
        self._play_channel(position, clip)
    else:
        if not is_busy_channel:
            self._play_channel(position, clip)

What Else Do I Want to Change?

I’m sure if you ask any software developer about any of the projects they’re working on they’ve probably got a laundry list of changes, tweaks and fixes that they’d like to make when they get the time. I’m certainly no different! There are a few things that need work still.

Code Refactoring

This shouldn’t come as a surprise to anyone who has been anywhere near a software project, but even only a week or so in there are parts of the application that I’d love to get a chance to rewrite either because of simple reasons (play_clip could be better named) or for more structural purposes (loading songs from disk should be abstracted out of MusicLogic). For the most part I’m aiming for “done” rather than “perfect”, so I’m trying to avoid spending too much time modifying working code so that I can focus on actually moving the project forward.

More Features

This is definitely something ongoing since I’m largely just implementing what features I want as I think of them, but there are definitely more features that I could add. Off the top of my head I came up with the following:

  • Dynamically loading new banks/songs as they’re added to the folder structure during execution
  • Modifying audio clip order manually
  • Hiding/showing banks
  • Pre-configured set-lists
  • A method for handling more configuration options
  • A queue for specifying which audio clips to play next


Documentation

Any project is only useful if you can actually use it. While I’ve got the benefit of having written the whole thing, adding documentation, both inside the code and for the project in general, would be hugely beneficial to anyone else who wants to use it. I’m sure it’ll even be useful for me once I stop working on it and forget how any of it works!

(It wouldn’t be the first time).

Next Steps

So now I’m in full swing on this project and it’s no longer just conceptual: there’s an MVP (minimum viable product)! SoundFloored is still a long way from done though, with some of the bits I’m most afraid of (such as the electronics) coming up. So what’s on the horizon now?

1. Start working on phase two (breadboard implementation)

Once the components that I need arrive, I’m going to dive head-first into figuring out how to put together electronics components and get some physical buttons hooked up to my Pi and playing audio clips! I’ve got a lot to learn on that front, so I’m interested to see how it goes.

2. Keep working on the software implementation

I need to pick and choose parts from the “What Else Do I Want to Change?” section that are worth the time/effort and implement them.

3. Continue ordering/purchasing parts for upcoming phases

I’ve already placed a few orders and I know that I’ll need to place more, so I need to figure out what else I’m missing and buy it!

This is the second post in a series on the creation of SoundFloored; check out the third post, this time on the breadboard implementation!
