- Back around the turn of the century my wifi card died while I was reinstalling my system. With no money for a new card and no internet at home, I ended up with a very basic console-only Arch install, and the only audio software I had installed was SoX. I started out using SoX and Bash to make music, explored Lame's ability to encode almost anything as an mp3, and eventually discovered what TFA talks about. I never made anything I would call good, since it is not a method all that compatible with my interests, but it has stuck with me all these years and has left me feeling that much of computer music has stagnated (in method, not output) and that we have a great deal of room left to explore.
Stagnated is not quite the right word. I think what computer music has been doing over the last couple of decades is establishing its primary instruments and techniques, the various audio DSLs, which is a fairly important thing musically speaking: it builds the culture and repertoire. Computer music is strongly rooted in how the musician interacts with the code; the code is the strings of their guitar, and I think we have barely begun exploring that relationship. What is the prepared piano of computer music? How do I stick a matchbook between the strings of the code, or weave a piece of yarn through it?
I hope more people go back to these very basic and simple ways of generating sound with computers and start experimenting with them; there is more out there than just ugens.
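To make the shape of that approach visible, here is a minimal sketch (not the commenter's actual scripts; the tone and rate are arbitrary): a tiny C program writes raw unsigned 8-bit samples to stdout, and SoX's play command is told how to interpret the stream.

    /* sketch: raw 8-bit samples on stdout, interpreted as audio by SoX.
     * build: cc -o tone tone.c -lm
     * play:  ./tone | play -t raw -r 8000 -e unsigned-integer -b 8 -c 1 -
     */
    #include <stdio.h>
    #include <math.h>

    #define PI 3.14159265358979323846

    int main(void) {
        const double rate = 8000.0;            /* samples per second */
        const double freq = 220.0;             /* an A3-ish sine tone */
        for (long t = 0; t < 8000L * 5; t++) { /* five seconds of audio */
            double s = sin(2.0 * PI * freq * t / rate);
            putchar((unsigned char)(128 + 127 * s)); /* centre the wave on 128 */
        }
        return 0;
    }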
by bovermyer
1 subcomment
- If you tell me about sound, and describe sound, and speculate about sound... give me sound.
It's a small thing. But if you're going to say you have something to say about sound, give me some sound to demonstrate your point.
- I came to this knowledge far too late, but I recently learned how music is (was?) made on old computers like the Atari 65XE or the NES (both built around the 6502). On top of the work described in the article, you also had to account for the vsync of the display standard in use and correlate it with the sound timing. This is why, for example, the same game plays at a different pitch and tempo on PAL and NTSC. That constraint is obsolete today, but an emulator still has to emulate one standard or the other to match the original code. We now have the great privilege of the OS abstracting sound away from the display sync, but that is still not the case on some embedded devices.
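A simplified sketch of why that happens (hypothetical numbers, not any particular machine's driver): if the music routine advances one step per vertical blank, its tempo scales directly with the display's refresh rate, so the same data runs about 20% faster on NTSC than on PAL.

    /* sketch: a music driver ticked once per vblank runs ~20% faster on NTSC.
     * frames_per_row is a made-up driver setting; real games vary. */
    #include <stdio.h>

    int main(void) {
        const double pal_hz  = 50.0;   /* PAL vblank rate in Hz  */
        const double ntsc_hz = 60.0;   /* NTSC vblank rate in Hz */
        const int frames_per_row = 6;  /* advance one pattern row every 6 vblanks */

        /* rows per minute = vblanks per minute / vblanks per row */
        printf("PAL : %.0f rows/min\n", pal_hz  * 60.0 / frames_per_row);
        printf("NTSC: %.0f rows/min\n", ntsc_hz * 60.0 / frames_per_row);
        return 0;
    }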
- Discussed at the time:
How to create minimal music with code in any programming language - https://news.ycombinator.com/item?id=24940624 - Oct 2020 (78 comments)
by felineflock
0 subcomments
- Bytebeat is kinda cool:
https://dollchan.net/bytebeat/#4AAAA+kUli10OgjAQhK/Ci3R3XXTb...
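For anyone who hasn't run into it: bytebeat evaluates one integer expression of t per sample and sends the low byte to the audio output. A minimal sketch in C, using the well-known t*(42&t>>10) formula (pipe it into SoX or aplay as unsigned 8-bit audio at 8 kHz):

    /* sketch: classic bytebeat - one expression of t per 8 kHz sample.
     * build: cc -o beat beat.c
     * play:  ./beat | play -t raw -r 8000 -e unsigned-integer -b 8 -c 1 -
     * (runs until interrupted) */
    #include <stdio.h>

    int main(void) {
        for (long t = 0; ; t++)
            putchar((unsigned char)(t * (42 & t >> 10)));
        return 0;
    }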
- > that’s why CD music had a sample rate of 22000 Hz. Modern sound cards however tend to use sampling rates twice as high - 44100 Hz or 48000 Hz or even 96000 Hz.
Not exactly the point of the article, but this is all sort of wrong. CDs use a sample rate of 44.1 kHz per channel, not 22 kHz; the ~22 kHz figure is roughly the Nyquist limit, the highest frequency a 44.1 kHz stream can represent, which sits just above the range of human hearing. DAT used 48 kHz, I believe, to divide evenly against film's 24 frames per second. 96 kHz is commonly used in production today, and the extra headroom is useful when editing samples without pushing processing artifacts down into the audible range.
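A toy sketch of the Nyquist point above (frequencies chosen arbitrarily): sampled at 44.1 kHz, a 34.1 kHz tone yields the same sample values as a 10 kHz tone with the sign flipped, so anything above roughly 22.05 kHz simply folds back into the representable range.

    /* sketch: the highest representable frequency is half the sample rate.
     * build: cc -o nyquist nyquist.c -lm */
    #include <stdio.h>
    #include <math.h>

    #define PI 3.14159265358979323846

    int main(void) {
        const double rate = 44100.0;
        const double f1 = 10000.0, f2 = rate - f1; /* 10 kHz vs 34.1 kHz */
        for (int n = 0; n < 8; n++) {
            double a = sin(2.0 * PI * f1 * n / rate);
            double b = sin(2.0 * PI * f2 * n / rate);
            printf("n=%d  %+.4f  %+.4f\n", n, a, b); /* b is always -a */
        }
        return 0;
    }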
- no audio sample on the webpage?
- (2020)