Design choice for sound effects
I'm trying to decide how I want to implement sound effects in my program. I've been debating between two options.
1) Create an abstract interface SoundEffect
and have every sound effect derive from that. Each sound effect is its own class. Upon construction it opens the sound file and plays it, and upon destruction it closes the file. The main drawback I see to this approach is that I'd end up with a lot of very small classes, which would greatly increase the number of files. I could put multiple related sound effects in a single header, but I'm not sure.
2) Since playing any sound effect calls the same code, with the only difference being the file it opens, I could create a single SoundEffect class whose constructor takes an enum value naming the sound effect. The class would use a switch on that value to play the appropriate sound.
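To make the two options concrete, here is a rough sketch of what I have in mind (the effect names, file names, and the open/play/close calls are just placeholders):

```cpp
// Option 1: an abstract interface with one small class per effect.
namespace option1 {
struct SoundEffect {
    virtual ~SoundEffect() = default;
};
struct ExplosionSound : SoundEffect {
    ExplosionSound()           { /* open "explosion.wav" and start playback */ }
    ~ExplosionSound() override { /* close the file */ }
};
} // namespace option1

// Option 2: a single class whose constructor switches on an enum.
namespace option2 {
enum class EffectId { Explosion, Footstep, Jump };
class SoundEffect {
public:
    explicit SoundEffect(EffectId id) {
        switch (id) {
            case EffectId::Explosion: /* open and play "explosion.wav" */ break;
            case EffectId::Footstep:  /* open and play "footstep.wav"  */ break;
            case EffectId::Jump:      /* open and play "jump.wav"      */ break;
        }
    }
};
} // namespace option2
```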
Obviously I'm debating over an OOP approach vs a more "traditional" approach, and I'm wondering what the best design choice is here. I am heavily leaning towards the OOP approach, but I'm not sure how I want to structure the files. If you have any other recommendations, I'd be glad to hear them.
Sounds are data. The process of playing a sound uses a system resource (the sound card) that most machines have only one of. Talking to the sound device directly is normally more than a little complex, although if you use an API it can appear simple.
So it doesn't make much sense for sounds to know how to play themselves. They would end up fighting for control of the single resource that is the sound device.
If you must use classes, then you should have a class that represents the sound device you want to play to, and separate classes that represent things that can be played.
Personally, I would skip wrapping sound effects in a class; data is data, and there really isn't any need to give every piece of data methods as well. It is possible to overuse classes, you know.
Wrapping the sound device in a class makes a lot of sense, though. It lets you abstract the specifics of the sound API you use away from the rest of your code, and it localizes the code that chooses which sound device to use if there is more than one.
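As a rough sketch of what I mean (the SoundClip fields and the body of play() are placeholders for whatever audio API you end up using):

```cpp
#include <vector>

// Plain data: a decoded sound clip. It has no behaviour of its own.
struct SoundClip {
    std::vector<short> samples;
    int sampleRate = 44100;
};

// One object owns the single physical device and hides the underlying
// API (OpenAL, SDL_mixer, DirectSound, ...) from the rest of the code.
class SoundDevice {
public:
    SoundDevice()  { /* initialise the audio API, pick an output device */ }
    ~SoundDevice() { /* shut the device down */ }

    // Non-copyable: there is only one device to fight over.
    SoundDevice(const SoundDevice&) = delete;
    SoundDevice& operator=(const SoundDevice&) = delete;

    void play(const SoundClip& clip) {
        // Hand the sample data to the underlying API here.
        (void)clip;
    }
};
```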
If I understand that right, you are hard-coding the sound effects for all possible sounds?
That sounds wrong: you create different subclasses for differing behaviour, not for differing data.
If you have certain sound effect types that need preprocessing of the data, subclasses make sense. If the project is bigger, you might want to separate the effect-handling code from the effect parameters so you can change effects without rebuilding the application (e.g. FMOD separates coding and sound design).
For playing different sound files, just let the class's constructor take the path or a resource id for the sound file; there is no switch needed here.
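For example (a sketch only; play() just forwards to whatever playback code you already have):

```cpp
#include <string>

class SoundEffect {
public:
    // The constructor just records which file to play; different sounds
    // are different data, not different classes and not a switch.
    explicit SoundEffect(const std::string& path) : path_(path) {}

    void play() const {
        // Hand path_ to the sound API / device wrapper of your choice.
    }

private:
    std::string path_;
};

// Usage: the file name is the only thing that varies.
// SoundEffect explosion("sounds/explosion.wav");
// SoundEffect footstep("sounds/footstep.wav");
```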
If you're dealing with a large number of sound files that are used repeatedly, a pool-based approach would be useful to avoid reloading files every time you play them. One idiom for that is the flyweight pattern (see e.g. Boost.Flyweight for an implementation).
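A minimal sketch along those lines, assuming a hypothetical SoundBuffer type whose constructor loads the file (Boost.Flyweight's key_value adaptor then makes all effects constructed from the same path share one loaded buffer):

```cpp
#include <boost/flyweight.hpp>
#include <boost/flyweight/key_value.hpp>
#include <string>
#include <vector>

// Hypothetical buffer type: its constructor decodes the file at 'path'.
struct SoundBuffer {
    explicit SoundBuffer(const std::string& path) { /* load samples from path */ }
    std::vector<short> samples;
};

// All flyweights built from the same path key share one SoundBuffer instance.
using SharedBuffer =
    boost::flyweight<boost::flyweights::key_value<std::string, SoundBuffer>>;

class SoundEffect {
public:
    explicit SoundEffect(const std::string& path) : buffer_(path) {}
    const SoundBuffer& buffer() const { return buffer_.get(); }
private:
    SharedBuffer buffer_;
};
```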