Google Reader died in 2013, a moment that many of the Very Online still rue. Google killed a service that easily connected people to all of the information sources they wanted to follow; even if it didn’t have a social network, or monetizable ads, or anything like that, it was a very useful service. When it went away, all that was left for people to follow news was social media. And we all know how Facebook, Instagram, Twitter, and the rest have gone as sources of verifiable information.
But now, with the looming downfall of Twitter and the refocus of Facebook on the metaverse, there is again a space for people to curate their information space in a different way. And for me, at least, I’ve been going back to that which I left when Google Reader shut down: RSS feeds.
The joy of RSS feeds is that, primarily, they are a private and personal way to consume information. You get to select the sources that you want, and then you get to read them in your reader without seeing all of the commentary from the uncle you haven’t seen in two years, brigading misogynists and homophobes, crypto-reply guys, or anything else. It is quite a refreshing experience to sit with the information that you’re interested in by yourself; it is almost like curating a personal magazine that you get to pop into whenever you want.
RSS is a technology that is still built into most websites across the internet, even if it is not as prominent as it once was. And it is still the primary technology behind podcasts, even if Spotify is trying to make that more proprietary. But an RSS feed by itself doesn’t do anything for you: you need a feed reader or feed aggregator to bring all of your sources and interests together into one place. After the downfall of Google Reader, a bunch of different options popped up, but none has ever reached the ubiquity that Reader had. There are, however, a bunch of good ones out there, depending on what you want.
Feedbin is the one I currently use, and I like it because it supports the couple of different ways that people mainly subscribe to text content these days. In addition to letting you paste in an RSS feed yourself, Feedbin can go find the feeds for any website you want to follow. But the killer feature that I love is that it gives you a bespoke email address, so you can gather all of the newsletters to which you subscribe in the same place as your RSS feeds. Having one place for both of these has upped my use of newsletters substantially, as has the fact that it gets them out of my regular email inbox. Don’t look at my inbox; it’s a mess.
I previously used Newsblur, which also offers an experience very similar to Google Reader; however, the pace of updates there seems to have slowed, and when I left it was getting crufty around the edges. Tiny Tiny RSS was another program I used, and it was even closer to the exact Google Reader experience; the catch, however, is that it is a self-hosted application, so you have to be comfortable deploying a server and Docker containers. Feedly is the most popular feed aggregator out there, and I know a lot of people like it; I just haven’t used it personally.
The downside of RSS is that it is another inbox to maintain, like your email, like your to-do list, like a bunch of other things. But if you want to curate a more precise and personal news experience than any algorithm could ever give you, I suggest you give it a try!
In addition to being a prospect researcher, I have been a desktop Linux user for the past 15 years. Over that time, I have used many of the most popular distributions, such as Ubuntu, Fedora, Arch Linux, Debian, Linux Mint, and more. Right now, however, I am back on Ubuntu, which is probably the most popular Linux distribution in the world, especially among people using it as a desktop operating system.
However, the fact that they are a UK company allows me to also look into their corporate filings, even though they are a private company. Unlike private companies in the United States, which aren’t required to file anything with the SEC, private companies in the UK are required to at least file an annual statement with Companies House. Companies House is the business entity registration agency for the UK, and while it doesn’t provide the same kind of detailed information that the SEC does about US public companies, it provides (infinitely) more information about private companies.
So, what kind of information can you find in Companies House? Let’s take the example of Canonical, the company that makes my favorite Linux distribution, Ubuntu.
Like US private companies, UK private companies can have a complicated web of holding companies and shell companies that masks ultimate ownership; this means that, when you search for a company in Companies House, you may get a lot of companies with similar names, old corporate structures, or things like that. However, once you get into the actual annual filings, Companies House requires that the company state its ultimate controlling organization. This can be a big help in trying to unravel the web of holding companies and trace things back to their ultimate owners.
For Canonical, the makers of Ubuntu, their primary operating entity is Canonical Group Limited. Canonical filed their 2020 annual report a week ago, and so we can now see how this Linux and services based company did as compared to 2019.
The beginning of the annual report is a description of the principal activities of the organization, much like an annual report of a US public company. For Canonical Group Limited, they describe their principal operations as the “sales, engineering and support of Ubuntu services provided by the companies within this group.” Those companies include all of Canonical Group Limited’s subsidiaries, which include those in the United States, China, the UK, Canada, and Japan. They also get into detail about the operations and products of Canonical, which include Ubuntu on OpenStack, KVM, Kubernetes, and more.
After that description, the annual report gives the top-line accounting numbers for 2020 versus 2019. For Canonical Holdings Limited, the parent company of Canonical Group Limited, their revenue went up from $119M to $139M, and their average number of employees throughout the year went up from 473 to 505.
Four years ago, the company went through a large restructuring, firing dozens of employees and shuttering development on the Unity display system. The fact that both their revenue and their employee count are growing is encouraging for the long-term sustainability of the company, as is the fact that they are profitable. Currently, the company is not worried about its operating profit number; they say that they want to take any profit and reinvest it in the company, growing employee headcount and investing in research and development to maintain their position in the market.
Finally, on the last page (page 33), it states, as everyone who watches Canonical knows, that the ultimate controlling party is Mark Shuttleworth. But, knowing that, we can go back through the financial statements and see what that means. In one spot, it says that Canonical Group Limited has been loaned $89M from the controlling entity, and that they have no other loans from outside the group. That means, as of right now, Canonical owes Mark Shuttleworth and the rest of his associated businesses that amount of money. That number is down from previous years, and doesn’t represent the total investment Shuttleworth has made, but it does show that he continues to invest his personal fortune significantly into the company.
This is just an overview of the 33 pages of information that Canonical and other UK private companies have to file each year. Like I said at the outset, it is a lot more information than US private companies have to file, and it sheds light on the web of holding companies that are in play. If you have prospects who own companies in the UK, you can use Companies House to find plenty of information to do research for your non-profit organization.
Starting in January, beer drinkers around Richmond noticed that the beers of Bell’s Brewery, known for favorites such as Oberon and Two Hearted Ale, were no longer appearing on the shelves of grocery stores or at restaurants. Later, in February, we learned that the loss of Bell’s was not just limited to Richmond, but extended to the rest of the Commonwealth. The legal arguments have centered on the amount of information that Loveland, Bell’s distributor in Virginia, was supposed to provide the brewery as part of being purchased by Premium Distributors of Virginia; however, scuttlebutt in the industry theorizes that another cause may be the parent company of Premium Distributors of Virginia, with whom Bell’s has previously quarreled.
The initial decision has finally come down from the Virginia Alcoholic Beverage Control Authority this week, and the board’s order compels the two sides into mandatory arbitration to work out how much information is due to each party in this action. Since I am not a lawyer, just a person who likes to use the Freedom of Information Act, I have embedded the PDF that I received from the ABC below for all of you to sort through the legalese. Enjoy!
Finding a good place to host your podcast can be a struggle; the struggle is doubled if you want to find a place to do so for free. I have been producing Filibuster, the premier podcast about D.C. United, for the past 4 years, and in that time we have switched hosting providers twice (and may be doing so again soon with the problems that are coming out at Soundcloud).
But you need two main things to successfully host a podcast:
- A place to store the file
- An RSS feed to put into Apple Podcasts, Google Play Podcasts, Stitcher, etc.
Since our podcast is licensed under a Creative Commons Attribution-Share Alike license, I have also been uploading it to the Internet Archive as a backup storage solution and as a hedge against what might happen to Soundcloud or other hosting providers. Once you reach 50 episodes on the Internet Archive, you can contact them and have them create a collection for you. This gives you a unified place for your episodes and, more importantly, an RSS feed.
However, until you get to 50 episodes, or if you prefer finer control, there is a free workaround for getting a separate RSS feed. Simply create a free WordPress.com blog for your podcast, and include a direct link to the MP3 hosted on the Internet Archive (I chose to do it as a link to directly download the episode). I also embedded the Internet Archive’s web player into the post. I created such a blog as a proof of concept, and it works. You can then take the RSS feed for the blog and put it everywhere it needs to go. This may also work on other blogging platforms, but I haven’t tried.
Paid hosting providers can provide a lot of features, such as automatic submission to Apple Podcasts and other locations, statistics, mobile apps, and more. And if those are features you want, there are plenty of great hosts out there. But to start and sustain a podcast, you don’t have to pay if you don’t want to.
As I talked about in my letter to the Governor of Virginia, Terry McAuliffe, and the Secretary of Education, Dietra Trent, the Library of Virginia has been forced to make massive and devastating budget cuts, which have resulted in 12% of their total staff being laid off. The other shoe has now dropped: the Library will now only be open four days a week, Tuesday through Friday.
Critically, closing the Library on Saturday will be the most painful for the general public, since that is the day that people who work are able to get there and do research. The only statement that the Library made, to the Richmond Times-Dispatch, was
“Suspending our Saturday hours and closing our reading rooms on Mondays is heartbreaking for us, but is necessary,” Sandra G. Treadway, Librarian of Virginia, said in a statement issued Tuesday morning.
Other services that will be cut include “training for records officers as well as longer wait times to fill orders for digital images and to make new collections available.”
Digitization. Processing. Reference. Training. All aspects of the Library of Virginia’s services are being affected and, make no mistake, it is a crushing blow.
Below is a letter that I wrote to Governor Terry McAuliffe and Secretary of Education Dietra Trent about the proposed layoffs at the Library of Virginia. 26 state employees have to be laid off state-wide, and out of that number 15 are currently proposed to come from the Library of Virginia. It is unfair, inequitable, and devastating to an agency that has borne more than its share of layoffs going back to 2002.
I urge you to contact the Governor and the Secretary using the form on their website and tell them why you support the Library of Virginia and why these cuts are completely unfair.
Dear Governor McAuliffe and Secretary Trent,
I strongly urge you to reconsider the layoffs of 15 people from the Library of Virginia out of a statewide layoff total of 26. The first issue is fairness. No one wants layoffs to come from their agency, but the fact that the Library of Virginia is bearing over half of the layoffs for the entire state government simply isn’t fair. Spreading them across the Executive Branch helps minimize the impact to any one agency; if things are allowed to stand as they are, the programs and services of the Library will be gutted.
But secondly, more importantly, is a message that you may not have heard much, but that is very, very true: the Library is a critical and key part of state government and our Commonwealth and needs to be protected and grown. The Library performs critical services for the Commonwealth, such as records management, the preservation of Virginia’s historical record, and educating students and the public about the importance of primary sources, among many other things.
The records retention schedules created by the records analysis team and the records collected, preserved, and made available by the archivists at the Library of Virginia are the cornerstone of open and transparent government here in the Commonwealth. The records at the Library have been used in court cases, by members of the media, by citizens of the Commonwealth, and by members of your own administration to review the day-to-day actions of the government. The Library is also one of the first in the country to publish, en masse, the emails of a gubernatorial administration, with the emails of Tim Kaine now available online. In a day when emails are constantly in the news, this project shows what an archive can do. All of this is threatened by these potential layoffs.
This last point, about the emails of Governor Tim Kaine, also illuminates a larger point. With the records of state and local government becoming primarily digital, rather than paper, the human and resource cost for appropriate records retention, long-term archival storage, and processes to make these documents available just like paper records is skyrocketing. We need more records managers and archivists to deal with this new future, not less, and the Library needs more resources to make sure that there isn’t a massive gap in the historical record because we aren’t able to deal with digital archives.
The Library also does dozens of educational programs every year, ranging from teaching school-aged children about the importance of primary sources, to genealogy workshops for Virginia residents of all ages, to book talks given by important authors from Virginia and beyond. These include events requested by members of the General Assembly during the legislative session, which is a perk that the members get to give their constituents who are able to come to Richmond. This is an important educational experience for Virginians, both young and old, to learn about primary sources.
If these cuts are allowed to take place in the way they are currently constituted, you will be doing irreparable harm to the historical record of Virginia, the openness of government in Virginia, and the education of the citizens of Virginia. I urge you to spare the Library of Virginia, which has already been hit with devastating layoffs in every round since 2002, and spread these layoffs more equitably across state government agencies. I look forward to your response.
I’ve been trying to build a home file server, mostly to store backups, using a Raspberry Pi and a Western Digital My Passport Ultra as the main storage unit. For years, I used an old Compaq desktop tower as my backup computer, but the fact that it won’t turn on and the fact that it would cost more to fix it than to just buy a Raspberry Pi has led me down this new road.
But after I bought the My Passport Ultra, I tried to plug it into my laptop, running Debian Sid. It would mount, but I could not access it through the file manager or on the command line. At first, I did what any good Linux user (or librarian) would do: I googled around for an answer. According to everything I read, the encryption on the My Passport Ultra required a Mac or Windows computer to decrypt, and even then you would still have a vestigial piece of their encryption on the drive.
I first tried to use their decryption software under Wine; that didn’t work because it couldn’t find the drive, even though it was plugged in and I had used winecfg to make sure the drive was discoverable. I then tried to use my wife’s old Mac, but quickly remembered why she doesn’t use it anymore and why I got her an Android tablet for Christmas last year: every 20-60 seconds, it would shut down and reboot, so I never even had time to download the decryption software.
However, being a Linux user, I decided to just try stuff. So I plugged the drive back into my computer. GParted would not run and would not recognize the drive, so I couldn’t format it that way. However, I finally found a solution, and a simple one at that: I unmounted it and then just ran the most basic formatting command out there.
sudo mkfs.vfat /dev/sdb1
This completely blew through all of the My Passport Ultra’s supposed encryption (which I think was just software encryption; nothing on the drive itself was actually encrypted) and made the whole thing completely usable by me. I later formatted it as btrfs for use on my file server, and it is now receiving an rsync of all of the pictures, music, and files from my laptop. Since there are so many threads out there about how it isn’t possible to use this drive on Linux without first freeing it on a Windows or Mac computer, I figured I’d write this up so people know that yes, you can do it just on Linux.
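For anyone repeating this at home, the whole fix plus the later btrfs reformat is only a couple of commands. Here is a guarded sketch; the device node /dev/sdb1 is a stand-in for whatever lsblk shows for your drive, and mkfs destroys everything on the partition, so triple-check it first:

```shell
DEV=/dev/sdb1   # hypothetical device node; confirm yours with lsblk first
if [ -b "$DEV" ]; then
    # the partition must be unmounted before formatting
    sudo umount "$DEV" 2>/dev/null || true
    # this wipes the partition, WD's software "encryption" and all
    sudo mkfs.btrfs -f "$DEV"
    result="formatted"
else
    echo "No block device at $DEV; plug the drive in and check lsblk."
    result="skipped"
fi
```

Substitute mkfs.vfat for mkfs.btrfs if you just want a FAT drive you can move between machines.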
A couple of years ago, I hosted this website and a couple others on my first-generation Raspberry Pi. However, it didn’t quite have enough power to even support my light usage, and so I went over to DreamHost (started with a very cheap promo deal, which went up to $120 in my second year). This year, my second renewal at the $120 level was coming around, and I thought that that was a lot of money to spend on my website when I have the skills to host it myself.
In the intervening years, I had purchased a Raspberry Pi 2, and it really is a great option for hosting a website. With 1GB of RAM and a 900 MHz ARM chip, the power that you’re getting is actually fairly similar to (or even better than) what the lowest-tier paid hosting sites are giving you. With that in mind, I went back to my Raspberry Pi to get it going and replace my paid hosting with a computer that sits behind my television.
The first thing that I did was to download Raspbian; it is the primary supported Raspberry Pi distribution, and I have a long history with Debian. I did make sure to disable the graphical user interface since I don’t need that on a server and so it runs with a little less overhead. Debian stable is always a great base off of which to build a server, and the current version of Raspbian is built on Debian Jessie. I’ll leave it to the documentation of Raspbian and Raspberry Pi themselves to tell you how to install your system.
I’ve wanted to try out using nginx for a while, but with a time crunch before my DreamHost payment was due, I just went for the old standby: Apache. I can configure Apache in my sleep these days, and so it went quickly and easily.
After doing an “apt-get install apache2” and an “a2enmod rewrite,” you should be ready to create your site configuration file. In the past, I used to struggle to find the right settings to make the permalinks pretty, but I’ve finally found the right site config to make it simple.
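The site config in question is, at heart, just a standard virtual host. Here is a minimal sketch (yoursitename.org and the paths are placeholders to change); the AllowOverride All line is what lets a CMS’s .htaccess rewrite rules produce the pretty permalinks:

```apache
<VirtualHost *:80>
    ServerName yoursitename.org
    ServerAlias www.yoursitename.org
    DocumentRoot /var/www/yoursitename.org

    <Directory /var/www/yoursitename.org>
        Options FollowSymLinks
        # lets WordPress's (or another CMS's) .htaccess handle pretty permalinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/yoursitename.org-error.log
    CustomLog ${APACHE_LOG_DIR}/yoursitename.org-access.log combined
</VirtualHost>
```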
Copy that template into your /etc/apache2/sites-available folder, and name it something like yoursitename.org.conf . Change all of the information to match your site, and then run “a2ensite yoursitename.org” to activate it. You’ll also need to run a “service apache2 reload” to get Apache going.
You’ll need to put your whole site in the document root found in the site configuration file above. You can hand write HTML in there, or you can go for a full CMS like WordPress, Drupal, Ghost, or many, many others. Once you’ve put the files there, I recommend changing the owner of the files to the www-data user; it helps provide some security should your site be attacked. “chown -R www-data /var/www/yoursitename.org” should get that done for you.
On this site, I installed MariaDB (a drop-in replacement for MySQL) and then WordPress, but your choices are endless. I have two sites running on a single Raspberry Pi right now, with a third coming shortly; a “free -h” shows that I’m using 182 MB of memory right now.
IMPORTANT UPDATE: Now, to get your site viewable on the larger internet, you have to get your DNS settings straight. Go to your domain name registrar (I use Hover), and go to the DNS tab. Find out what your public IP address is: you can either do it by logging into your router and poking around in there or by Googling and going to one of those sites that tell you.
I was not really able to get IPv6 to work by itself, so I added both the IPv4 and IPv6 address to my registrar’s DNS record. You put your IPv4 address in an A record, and the IPv6 in a AAAA record; I just left the hostname part blank and just added the addresses. Once you save those it should take about a half hour or an hour for the new location of your address to populate to all the DNS servers around the world, and then typing in “yoursitename.org” should actually take you to your site.
Your public IP from your ISP may change from time to time, so if your site is suddenly not working check this first.
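A quick way to check that the records have propagated is dig, from the dnsutils package; yoursitename.org is a placeholder here:

```shell
DOMAIN=yoursitename.org   # placeholder; substitute your own domain
if command -v dig >/dev/null 2>&1; then
    # each should print the address you entered at your registrar
    dig +short "$DOMAIN" A    || true
    dig +short "$DOMAIN" AAAA || true
else
    echo "dig not found; install it with: sudo apt-get install dnsutils"
fi
```

If the A record still shows an old address, wait a bit longer; DNS caches along the way take time to expire.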
A Raspberry Pi 2 is a pretty good option for hosting a fairly low-activity site, like your personal resume, personal website, or a website when you’re just starting out and don’t want to pay for hosting.
This is the text of a presentation as written that I gave at the Fall meeting of MARAC in Roanoke, Va., on October 9, 2015. I adlibbed some, but it is pretty close.
My name is Ben Bromley and I am a development research analyst with Virginia Commonwealth University. While this may seem to be a left turn from traditional archival jobs, it is another job in the greater field of information management. The training that I received in library school and on the job as an archivist gave me the qualifications needed to get a job in prospect research and management.
Intro to University Development: We all know that universities and non-profits are reaching historic lows in funding from state and federal governments. Cutting public support has been the trend over the past decade, and there doesn’t seem to be much sign of it abating.
For these institutions to continue to provide the same level of services they have in the past (or, maybe some day, to start adding new positions and expanding services), private philanthropy is needed. Embedded in the world of private philanthropy is a job that often calls for applicants with library and information science degrees, and one that my archival training prepared me for very well: prospect research.
Prospect research and prospect management overview (history of?): The goal of prospect research is to identify new prospects who might be interested in donating to your organization and then to find the best match for their interests at your organization. In short, we try to find out a potential prospect’s affinity to our organization and then also need to estimate the capacity that they have to make a major gift.
The formalizing of prospect research as its own field is fairly recent, with APRA, the professional organization for prospect researchers, only being established less than 30 years ago.
There are many different places that a prospect researcher can work: many larger organizations, such as universities, will have prospect researchers on staff; some smaller organizations may rely on individual freelancers or consultants, and there are vendors who, in addition to other services, hire prospect researchers to work on their side.
The work is often collaborative, working not only with the other staff in your office but with the development officers and other University employees.
Research: The part of the job that came most naturally to me, and that will, I think, come most naturally to most of you, was research. The most visible product of the prospect researcher is the research profile. Instead of writing a biographical or historical note on a person or organization as part of a finding aid, we are writing a similar biography of a person or organization for the use of a development officer.
Just like archival description, the types of profiles that we write depend on the needs of the development officers. Some development officers will only want a brief profile, summarizing a person’s current job and how much we think they can give. Other development officers want as much information as we can possibly provide on a prospect so that they can be prepared for any eventuality. We also balance what a development officer thinks they need with the time we have available and the tasks we are doing for the rest of the institution.
The information that we typically provide is a summary of a person’s professional career, their personal relationships, and their non-profit giving and affiliations. And just like a bioghist note, not everything that we find is going to go into a profile. Even though we are doing research on people using publicly available information, people still have a right to privacy; that right is compounded when we bring all of this information together from multiple sources into one document. We make sure to include what the development officer needs to know, and leave out the rest.
One of the key differences between this research and traditional archival description is right there in my job title: along with presenting the information, I also analyze the information and make conclusions based on my instinct and the available information. Factors such as the person’s relationship to the institution, previous history of acting philanthropically, interests and hobbies, and a myriad of other information are taken into consideration when evaluating a prospect. Just because someone has plenty of money does not mean that they actually give money, or are actually willing to give money to your institution.
This analysis comes into play most when we are trying to estimate the giving capacity of a person. Obviously, we cannot see bank accounts, stock holdings, or anything like that. The resources that we use (and I will get to those in a minute) can only find publicly available information. So, estimating someone’s giving capacity requires some investigation, some guesswork, and some analysis. We typically look at what the person’s current and previous jobs are, and how long they’ve been in the field, to try and estimate what their current salary might be. We look at their real estate holdings, political and non-profit giving, and giving to our organization as well, and plug all of this information into a formula that is fairly standard across the industry. And we take a look at their personal situation and family as well; even if two people look the same through the formula, someone with two young children is probably less likely to make a big gift than someone whose children are grown. That is the kind of analysis that we bring to the table.
Data integrity: Another one of our primary duties as researchers is making sure that the information found in our database of record is accurate and up-to-date. We are the custodians of the database, and the information in it is our responsibility.
We make sure that we have the most up-to-date contact information for everyone in our database so that we know that our communications are getting to them. Development officers have a limited amount of time, and can only spend one-on-one time with people likely to give larger gifts. However, universities still want to stay in touch with their entire alumni base, since any money given is sorely needed, and people giving small amounts now may eventually give larger amounts in the future. By making sure we have updated contact information, we are helping the university lay the groundwork for the next generation of donors.
On a more practical note, we also want to make sure that we know, for example, when people pass away so that their spouses do not continue to get mailings or emails.
Prospect management was the area that required the most learning for me as I made the transition from archival work to prospect research. Each development officer carries a portfolio of prospects with whom they are working to try and secure gifts to the institution. Some are proactive about adding and dropping prospects, but often they need our help in doing so. We want to make sure the development officers have the right prospects in their portfolio, so we help them identify new ones and help them make the decision to drop a prospect if they are not right for that department or for our institution as a whole.
Another part of prospect management is making sure that development officers document the contact that they have with donors. We want to make sure that we stay out of each other’s way and don’t bombard the prospect with conflicting information. Encouraging and reminding the development officers to document their interactions in the database of record helps prevent this from happening.
So what sources do we use to conduct prospect research? We use a mix of proprietary and freely available databases and sources of information to perform our job.
For biographical research I will use proprietary databases such as LexisNexis, genealogical resources such as Ancestry and FamilySearch, newspapers, court records, and social media accounts.
For all aspects of estimating someone’s giving capacity, I will use resources such as salary surveys, which are typically published for each industry; property assessment databases and Zillow; SEC and FEC filings; and more.
The two hardest parts of making the transition from archival work to prospect research are somewhat related: math, and the business and accounting terminology that typically surrounds it. Luckily, as archivists, we are naturally curious, so I now know things like what a charitable remainder unitrust is or how to read a 10-K filing submitted to the SEC.
The Special Libraries Association specifically supports prospect researchers, and they have some good resources on their website. There are also local APRA chapters throughout the country, which have resources and conferences; APRA-Virginia, for example, typically has a one-day conference twice a year that is affordable for people to attend. There are also blog posts and other resources online for other archivists and librarians going into prospect research work. I hope I have given you a good overview of how archivists are already qualified to be prospect researchers. Feel free to ask me questions in the Q&A, or come up afterwards and talk to me if you have any other questions.
Having been the producer of Filibuster, the Black and Red United podcast, for over 70 episodes, I have gone through a number of recording methods, most of which have annoyed me in one way or another. People keep asking me, though, how we make the podcast, so I figured I would lay out my tools and all of the methods I have used, and see if any of them work for you. I’ve broken this up into production, hosting, and recording.
First, however, here is the production workflow, which I have used throughout.
- I edit the podcast in Audacity, which is a great audio editor that works on Linux, Mac, and Windows.
- The effects that I typically use are Fade In/Fade Out (for the intro and outro music), Truncate Silence (to remove long pauses and dead air), Noise Removal (to get rid of background hums and hisses), Leveler (to pump up the volume of the outro after fading it in and out), and Compressor (to even everything out).
- When everything is done, I export to MP3. Make sure to check your export settings here, because there is never a need for an audio podcast to be 80 MB. I use Variable Bit Rate encoding at quality level 7, which usually gives me a 17-25 MB file, and it sounds perfectly fine.
- Once the MP3 is exported, you are ready to upload it to your platform of choice.
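As an aside, the same VBR export can be reproduced on the command line. This is just a sketch, assuming you have ffmpeg built with the libmp3lame encoder; the filenames are placeholders, and a generated test tone stands in for a real episode master:

```shell
# Generate a one-second 440 Hz test tone as a lossless WAV
# (this stands in for an edited episode master)
ffmpeg -loglevel error -y -f lavfi -i "sine=frequency=440:duration=1" episode.wav

# Encode to MP3 with Variable Bit Rate at quality level 7;
# ffmpeg's -q:a maps to LAME's VBR quality scale
ffmpeg -loglevel error -y -i episode.wav -codec:a libmp3lame -q:a 7 episode.mp3
```

Lower `-q:a` values mean higher quality and bigger files, so 7 lands in the same "small but perfectly listenable" range described above.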
Hosting is how you get your episode off of your computer and into other people’s ears. Over the course of Filibuster, we have used a number of different hosting methods, but let me just say this now: you don’t have to pay for it if you’re willing to do a little work.
- Buzzsprout was the first hosting platform we used, and it was your typical paid podcast hosting site, like Libsyn and many others. Their cheapest real plan is $12 a month, so that’s not great.
- Through SB Nation, we got a free Pro account with Soundcloud, which is where we currently host the podcast. You can apply for their beta podcasting platform, so that could be a good option for you as well.
- YouTube is always an easy option, even if you’re not a video podcast, but it would be difficult to get into iTunes and other podcast directories; not really recommended unless you’re doing video.
- The best free option, as long as you have a blog (and who doesn’t?), is to host your audio on the Internet Archive. You can embed it into your post, which I have done with the very first episode of Filibuster. Placing a link (<a>) to the raw MP3 in the page as well will embed the file in your RSS feed, allowing it to be picked up by podcatchers. The feed for your podcast category then becomes your podcast feed.
Download the MP3 here
- The catch with the Internet Archive is that they really want you to license it under a Creative Commons license. That’s fine for us; we already license it that way on Soundcloud, so the Internet Archive would work for us too.
- Some sites recommend running your podcast feed through FeedBurner so that you can submit it to iTunes, but I have not confirmed that myself.
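For the curious, what podcatchers actually look for in the feed is an enclosure element on each item, which is what a blog platform generates from that raw MP3 link. This is a minimal sketch of an RSS 2.0 entry with hypothetical URLs and values:

```xml
<item>
  <title>Filibuster Episode 1</title>
  <link>http://example.com/filibuster-episode-1/</link>
  <!-- The enclosure is what podcatchers download; length is the
       file size in bytes. Both URLs here are placeholders. -->
  <enclosure url="https://archive.org/download/example-item/episode1.mp3"
             length="18000000" type="audio/mpeg"/>
  <pubDate>Wed, 25 Mar 2015 12:00:00 GMT</pubDate>
</item>
```

If your blog software produces items like this for posts containing an MP3 link, the category feed really is a fully valid podcast feed.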
Recording is the part that has caused me the most frustration, with constant back and forth between methods. A note on recording: whenever possible, record to a lossless audio format such as WAV or FLAC; your final quality will be much better if you do.
- Before I took over the producer duties, we recorded via Skype with the producer using Garage Band to grab all of the audio and then do the editing. Update: Via the comments, LineIn and Soundflower were the programs used to record the podcast on a Mac.
- When I took over, we continued to use Skype to talk to each other; however, since I use Linux, Garage Band was not an option. We started by using Skype Call Recorder, which updates only very rarely (last update in 2013) but still works. We would sometimes have problems with people dropping out, and the recording would get ended if that happened to me.
- We then switched to Google+ Hangouts On Air for a long time. Despite the fact that it was “live” on YouTube, we only invited those whom we wanted on the show. After the show ended, I would download the video file, use VLC to rip the audio out, and then edit in Audacity. We had problems with audio clipping, people dropping out of the call, and people getting into the call in the first place. On the plus side, if someone drops out, the recording continues, and they can try to rejoin the call. This is a decent way to run a podcast.
- Since I like seeing the faces of the people to whom I am talking, we have now switched to a regular Google video hangout, which I record using Audio Recorder; it records from both my microphone and my sound card, which means we could switch between communication platforms without changing how we record. If you’re using Linux, I highly recommend this piece of software, which makes recording a podcast very easy.
- I just use a Blue Snowball USB microphone, which is a massive improvement over the built-in microphone in my computer. If you’re going to do your own podcast, getting a decent microphone is well worth the investment.
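The rip-the-audio-out step mentioned above can also be done with ffmpeg instead of VLC. This is a sketch assuming ffmpeg is installed, with placeholder filenames; a tiny synthetic clip stands in for the downloaded Hangouts On Air video:

```shell
# Create a one-second synthetic video-plus-audio clip to stand in
# for the downloaded Hangouts On Air recording
ffmpeg -loglevel error -y \
  -f lavfi -i "testsrc=duration=1:size=128x72:rate=10" \
  -f lavfi -i "sine=frequency=440:duration=1" \
  -shortest -c:v mpeg4 clip.avi

# Drop the video stream (-vn) and decode the audio to lossless WAV,
# ready to edit in Audacity
ffmpeg -loglevel error -y -i clip.avi -vn clip.wav
```

Extracting to WAV rather than straight to MP3 keeps the audio lossless through the editing stage, in line with the note above about recording formats.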
That’s all I can think of; any questions?
Cross posted from http://beforeextratime.com/2015/03/various-methods-of-creating-podcasts/