Here's a proof of concept where two or more people contribute tracks to a project from different physical locations. I will show how to set this up at a low level. This can all be accomplished without any centralized servers. What's needed are three things:
1. Network configuration
2. Multitrack recording software
3. Source control
A simple instant messenger can also be used so each person knows what state the project is in.
1. Network Configuration
To get machines to communicate directly with each other, first we need to know their IP addresses. Actually, there's usually more than one IP address we need. To get the external IP address that the world sees, the easiest thing is to go to a site like
http://www.whatsmyip.com
This is the public IP address that would need to be given to all the contributors of a project.
Another way to get this IP on Linux: I added this alias to my ~/.profile file:
alias whatsmyip="wget -q -O - checkip.dyndns.org|sed -e 's/.*Current IP Address: //' -e 's/<.*$//'"
Then I can just run this to get my IP:
whatsmyip
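To see what that sed pipeline is doing, here it is run against a canned copy of the page (the address 203.0.113.7 is a made-up documentation IP, not a real result):

```shell
# checkip.dyndns.org returns a small HTML page like the sample below;
# the sed pipeline strips everything around the address.
sample='<html><head><title>Current IP Check</title></head><body>Current IP Address: 203.0.113.7</body></html>'
echo "$sample" | sed -e 's/.*Current IP Address: //' -e 's/<.*$//'
# prints: 203.0.113.7
```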
Now, usually all machines on a small network will share the same external IP address. Inside the network, each is usually given a different "internal" IP address. The problem is that packets coming from the outside will need to be routed to the particular machine with the project files.
This is configured in the router, and will be different for each router brand/make/model. It may be called DMZ (after demilitarized zone), port forwarding, or under Applications/Games settings.
Ideally you can forward network packets to the static MAC address of the computer (a MAC address won't change).
My router, though, only allows routing packets to an internal IP. My home network uses DHCP (the standard configuration for a wireless router plugged into a cable modem), which is the easiest way to set up a network at home. If I plug my computer into the router, it is assigned a *dynamic* internal IP. But this also means that if I unplug my machine, its internal IP address can change when it's plugged back into the network. That's a problem if you always want the same packets going to the same internal IP address.
There's a couple approaches around this.
The simplest: once a machine is plugged in, it can just be left on. The IP shouldn't change unless it's unplugged. And even then, it may not.
You can also have static IPs on a DHCP network. One way to do this is to set the machine's IP address directly when it boots up. For example, on Linux:
ifconfig eth0 ip.addr.goes.here
To make this run automatically on every boot, drop this line in /etc/rc.local
So let's say my router is at 192.168.0.1, and I assign a static IP to my machine on the same network:
ifconfig eth0 192.168.0.100
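As a sketch, the boot-time version might look like this (eth0 and the addresses are the assumed example values from above; newer distros replace ifconfig with the ip tool, and some no longer run rc.local by default):

```shell
#!/bin/sh
# /etc/rc.local -- many distros run this at the end of boot
# Assign the static internal address used in the examples above
ifconfig eth0 192.168.0.100
# equivalent with the newer ip tool:
# ip addr add 192.168.0.100/24 dev eth0
exit 0
```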
Then in the router, I would forward all internet traffic on a port to this address, 192.168.0.100. This setup will be different for different routers. On my Linksys wireless router, I went to the web interface, which by default was at http://192.168.0.1, and found this under "Applications & Gaming". Let's say we are forwarding all traffic going to port 8000; route this to the machine at 192.168.0.100.
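As an aside, on a router that itself runs Linux, that same port forward boils down to a single NAT rule (here eth1 is assumed to be the WAN-facing interface):

```shell
# Forward TCP traffic arriving at port 8000 on the WAN side to the
# project machine on the internal network (eth1 is an assumed name).
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 8000 \
  -j DNAT --to-destination 192.168.0.100:8000
```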
Overall, whenever someone makes a request to the external IP, that request needs to reach the particular machine on your network. It's best to only expose the particular port(s) you will need.
The network routing will look something like:
internet traffic: hits external IP on port 8000 (public IP)
actually goes to: 192.168.0.1 (router)
which forwards to: 192.168.0.100 (machine with project)
Also, for security, be sure you've changed the username/password on the router if you are exposing any ports or machines on your network to the internet. If anything doesn't work, it's probably going to be at the routing level.
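One quick way to sanity-check the routing from a shell is a connection test using bash's built-in /dev/tcp redirection (the host and port below are the example values from above; from outside the network you'd test your external IP instead):

```shell
# Try to open a TCP connection; success means something is listening
# and the routing works. 192.168.0.100:8000 are the example values.
host=192.168.0.100
port=8000
if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
  echo "port $port on $host is open"
else
  echo "port $port on $host is closed or unreachable"
fi
```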
2. Picking a multitrack recorder
For free software, I think Audacity or Traverso are good picks for collaboration. Both are simple and run on Windows, Mac, and Linux. The only point where I'd say they are lacking is in supporting real-time LADSPA effects. Still, for the basic task, they're perfectly usable. Any software should work, though, so long as everyone has it and is using the same version.
3. Picking a version control system
The simplest method of sharing files and collaborating is to FTP files up to a central server. For simple and small projects, that's probably the best approach.
With multiple contributors, though, this has some long-term drawbacks. What happens if a file gets corrupted? Or two people edit the same file, and the last person overwrites the other's changes without realizing? Or someone loses a change and needs to "undo"?
Version Control Systems are like FTP, but add several things:
(1) an unlimited number of "undos": every time the files are "saved," it's like a running backup.
(2) protection against one change clobbering another.
(3) automatic merging of changes made in two copies of the same file.
Looking at different systems for handling large binary files, here's how I'd rank them for this task (IMO):
mercurial
git
bazaar
svn (requires central server)
IMO Mercurial may be the best pick here (though I haven't used it much). Mercurial has good support on all major platforms and works well with large binary files. Git should also work ok.
A very short mercurial tutorial: http://linux.com/feature/121157
Mercurial has four ways to publish files for other users:
1. over a file system (the directory IS the repository)
2. a built in web server
3. cgi (with apache)
4. ssh
Probably the easiest is option 2. It's not the most robust, since it only supports one connection at a time, but it's very lightweight and easy to set up. The best option for heavy use is probably 4.
To create a new repository on the main machine, 192.168.0.100 (which has a port exposed to the internet):
cd project_dir # navigate to the new project dir
hg init # create the repository
hg add # add all the new files
hg commit -m "some message" # commit (or "ok") them to the repository
hg serve # set up a mini web server on port 8000 (default)
# this will run until you kill it.
hg is the chemical symbol for mercury :)
Now you can allow this process to run until the other person is synced up, and kill it when you are done. No real need to have it always running. On rare occasion, something might already be listening on port 8000; in that case, hg serve -p can start the server on a different port.
The other person can pull files down with:
# pull a copy of your project down from the address
# http://ip:port
hg clone http://your.extern.ip.addr:8000 project_dir
After adding new files to the project, they can run:
hg add # add all their new files
hg commit -m "another note" # commit (or ok) the changes to the repository
hg push http://your.extern.ip.addr:8000 # push changes back to the repository
Note that hg add will not record files deleted from the system (as in cleaning up unused sources); hg remove (or hg addremove) handles that. It wouldn't save disk space anyway, since source control will still keep a versioned record of them.
Likewise, when you add more to the project, you can just do:
hg add # add all the new files
hg commit -m "explain changes" # commit them to the repository
# no need to push ... your directory is the repository
If they want to get additional changes they use:
cd project_dir # navigate to the project dir
hg pull -u # get all the new changes and update the working files
hg commit -m "another note" # commit (or ok) the changes to the repository
hg push # push changes back to the repository
Running into an SSL problem
By default, push requires SSL. You can start the server with a certificate:
hg serve --certificate cert.pem # cert.pem is a placeholder for your SSL certificate file
On a trusted network, or for lower-security testing, you can edit this file (in the repository that will be accepting data):
.hg/hgrc
Add this configuration to allow pushes from everyone, over http:
[web]
push_ssl = false
allow_push = *
For a little more security around accepting data, limit by username:
allow_push = frodo, sam
deny_push = gandalf
And turn off the server when you're not expecting a file. This is probably pretty low risk if you are just trading audio files back and forth (like sending files over email).
Though if there's a lot of file sharing, I'd recommend spending the time to set up https or ssh. This guide is more or less a bare-bones approach for setting it up and deciding if it's workable.
Hopefully that explains it all ... the only problem is getting these components to work together in a way that is easy for the end user. :)
mercurial book:
http://hgbook.red-bean.com/read/a-tour-of-mercurial-the-basics.html
4. Making it easy to Use
All the pieces exist; it's just a matter of hooking them together and making it easy to use. Which, IMO, means the user interface for the multitrack recorder should have a couple of standard options:
Open, Save, Save As, ...
But also the file menu could have options like:
Create Shared Project
Checkout Shared Project
Commit Shared Project
When setting up a local repository, there could also be options to limit users/IP's.
Then, instead of opening up a project file on the local machine, you could give it a URL where a project repository exists. "Checkout" pulls the changes down from the server, and "Commit" pushes up new changes.
Network configuration is about the only thing that couldn't be automated, unless there were a centralized server that everything routed traffic through.
2 comments:
can you record live at 2 locations onto 1 recording?
Yeah, that would be ideal. I had thought a bit about allowing real time recording, but wasn't sure how to work around network latency. One person will hear a beat a split second after it occurs, on the other end of the wire. I wasn't sure how well the tracks could be synced. I suppose though, as networks get faster, this problem goes away on its own.
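To put rough numbers on it (the 25 ms figure is just an assumed typical one-way internet latency, not a measurement):

```shell
# How much of a musical beat does one-way network latency eat up?
bpm=120
beat_ms=$((60000 / bpm))   # a beat at 120 BPM lasts 500 ms
latency_ms=25              # assumption: typical one-way latency
echo "one beat: ${beat_ms} ms"
echo "latency is $((100 * latency_ms / beat_ms))% of a beat"
# prints: one beat: 500 ms
#         latency is 5% of a beat
```

An offset of that size is generally big enough for musicians to hear, which is why syncing live takes over a network is hard.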
I had started on a project in Java a while back that would allow this, but at the time the Java sound libraries weren't up to snuff.
The setup I described wouldn't allow live recording from two sources.