Trolleybuses.net was born in 2001 as an outgrowth of pictures formerly hosted on Dave's Electric Railroads. Dave wanted to divest himself of the trolleybus pictures he'd accumulated over the years on his outstanding site, so I picked the ETBs up, as I already had a website on Dayton, OH trolleybuses. Since then, several thousand more pictures have been uploaded -- as of Nov 2006, there are over 5000 photos hosted. Several other folks picked up portions of the pictures Dave hosted, and links to their pages appear on the pages for the cities in question.
The site is non-profit (notice the lack of ads?), and is dedicated to educating the public about the history of electric bus transport in Canada, Mexico and the US ... aka "North America". One commenter (from outside North America) complained that, in what I consider a small-minded worldview, the site should (in his words) more accurately be trolleybuses.us. Of course, such thinking completely discounts the long history of trolleybus operations in Canada and Mexico. The site does not support copyright infringement -- if copyrighted photos are discovered, they're taken off the website.
It's built using a text editor, with pages autogenerated from a Word template using data in an Excel spreadsheet. Each picture has an HTML webpage associated with it, in order to display the picture and its caption. Each picture follows a naming convention: country_(photo size m|tn)_city abbreviation_bus manufacturer_number_location_date_contributor. Where information is unknown, it's simply not included. In turn, each city has its own webpage, with directories for the thumbnail pictures, full size pictures, and the individual HTML picture pages.
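To make the convention concrete, here's a rough sketch in Python (not anything the site actually runs, which is all Excel and Word) of how such a filename breaks into fields. The example filename is made up for illustration:

```python
# Field order from the site's naming convention; "size" is m (full) or tn (thumbnail).
FIELDS = ["country", "size", "city", "manufacturer", "number",
          "location", "date", "contributor"]

def parse_picture_name(stem):
    """Split a picture filename stem into its convention fields."""
    parts = stem.split("_")
    # Pair each field name with its value; when trailing fields are
    # unknown (and thus omitted), they simply drop out of the dict.
    return dict(zip(FIELDS, parts))

# Hypothetical example filename:
info = parse_picture_name("us_m_day_marmon_515_downtown_1985_smith")
```

A thumbnail of the same shot would differ only in the size field (`tn` instead of `m`), which is what makes duplicate shots from multiple contributors easy to spot.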
The "Forward" / "Back" convention from Dave's Railpix was ported over at the start. The problem was that it was very difficult to find where multiple contributors had supplied the same shot; that's why the picture naming convention was instituted. This created a new problem: each individual photo page would need to be re-edited if the display order changed (e.g., when pictures were added). Nobody has the time to make that change in possibly hundreds of webpages, and then be confident it worked right. I know -- I tried for a while.
Early in 2006, this problem became more acute, due to a large volume of pictures from one contributor (which, by the way, is a fantastic problem to have!), and the necessity to provide both a caption and a credit. Where the old format could work on a page with a dozen or so pictures, some pages were going to grow in excess of several hundred pictures. Several days of websearching suggested an answer: why not use a program to automatically generate the HTML pages, given a database of info about each picture?
Enter Microsoft Word and Excel. For each page, an Excel spreadsheet was built containing the following info:
Picture to be displayed
Picture behind it
Picture in front of it
Line for the table command on the city page
Excel's text-manipulation abilities are used to generate the picture pointing (forward/back) info, based on the picture filename in the "Picture to be displayed" field. All one does is copy this info from line to line, and Excel does the rest. City, company, and coloring are copied from one row to another.
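The forward/back bookkeeping is just a consequence of the display order. A minimal sketch of the same idea, in Python rather than Excel formulas: given the ordered list of pictures, each one's neighbors fall out of the list.

```python
def link_pictures(names):
    """Return (back, current, forward) triples; None at either end."""
    triples = []
    for i, name in enumerate(names):
        back = names[i - 1] if i > 0 else None          # picture behind it
        forward = names[i + 1] if i < len(names) - 1 else None  # picture in front
        triples.append((back, name, forward))
    return triples

# Re-sorting the list and re-running regenerates every pointer at once.
links = link_pictures(["pic1.jpg", "pic2.jpg", "pic3.jpg"])
```

This is why re-sorting rows in Excel and regenerating all the pages keeps every pointer consistent with no hand-editing.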
The issue then became ... how does one get a directory full of pictures into an Excel column? What I had done in the past was to shell out to a Command Prompt, use "dir > filename.txt" to dump the directory to a text file, and then bring that text file into Excel and edit it. About four more steps than necessary.
Enter dirlist.bat. After searching on the web, this extremely useful construct was found in a linux-noob.com forum post:
dir /B %1 /-p /o:gn > "%temp%\Dir Listing"
start notepad "%temp%\Dir Listing"
What the construct does is dump the contents of a directory into a Notepad text file, which can then be pasted line by line into Excel. The batch file was linked into the XP shell, so that one can right-click the directory one wants a file list from, select dirlist, and the text file is generated. One more step to copy and paste. Voila!
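For what it's worth, the same bare listing (one filename per line, sorted by name, like dir /B /o:gn) can be produced on any platform; here's a small Python equivalent, offered only as an illustration, not as part of the site's toolchain:

```python
import os

def dir_listing(path):
    """Return the filenames in a directory, one per line, sorted by name."""
    # os.listdir gives names only (the /B effect); sorted() gives /o:gn's
    # alphabetical order for a directory containing only files.
    return "\n".join(sorted(os.listdir(path)))

# The resulting text pastes straight into an Excel column.
```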
With the listing of pictures now one per line, a caption and credit info can be associated with each picture. However, that line by line list needs to be turned into a webpage.
The solution to this problem was found here:
Individual Merge Letters. Scroll down to "Add-in to merge letters to separate files" and "Naming the file from the data source", where the page directs you to merge a form letter directly to individual Word documents via a Word add-in template. One extracts the MMtoDocsRevnn.DOT template (where nn is the latest revision number) to the Word startup folder, whose location is defined in Word at Tools > Options > File Locations > Startup.
Using the info on the page, one can now make a gigantic Word document consisting of what will become individual webpages, one per page. The next trick was to find a way to split the pages up and have Word generate them. Further down the gmayor page, look at "Split the single merged document into separate letters". This is a macro which splits up the pages ... provided you gave a filename on the first line of the page you're making. To do this, more Excel text manipulation was used to put the targeted picture page's filename on the first line of the future webpage.
Yet another problem -- the resulting files were Word documents. To make them text documents, the macro had to be modified a bit. In the Sub SplitMergeLetter macro (the one used), the command "ActiveDocument.SaveAs FileName:=Docname, FileFormat:=wdFormatDocument" had to be changed to "ActiveDocument.SaveAs FileName:=Docname, FileFormat:=wdFormatText".
HTML files are generated from a page template I created. Picture order can be sorted in Excel, and the forward and back pointers update themselves, provided the pages are all regenerated. New pictures can be added to the Excel file, and when the Word template runs, it makes the new pages. Typographical errors are minimized: if the info is correctly fed from Excel, the page is correct. If an error is found (and there are caption errors), the Excel file is updated and the page regenerated. It takes less than 5 minutes to generate 600 or so pages on a Pentium 500MHz laptop with 256MB of memory, and if only a smaller subset of changed pages is needed, the template can accommodate that. After generating the individual pages, each picture's calling page from the city needs to be re-edited, but that's easy because that construct is also embedded in the Excel file. That could be automated, too, but for 60 or so cities, it was easier to just hand-edit each page.
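Boiled down, the whole Word/Excel pipeline is "fill a template from each data row, save each result under the filename the row dictates." Here's a condensed Python sketch of that idea -- the template markup and the example row are hypothetical, not the site's actual template:

```python
# A stand-in for the Word merge template; {fields} come from a data row.
PAGE_TEMPLATE = """<html><head><title>{caption}</title></head>
<body><a href="{back}">Back</a> <img src="{picture}">
<a href="{forward}">Forward</a>
<p>{caption} (photo: {credit})</p></body></html>"""

def render_pages(rows):
    """Map each data row to (output filename, page text)."""
    return {row["picture"].replace(".jpg", ".html"):
            PAGE_TEMPLATE.format(**row) for row in rows}

rows = [{"picture": "us_m_day_marmon_515.jpg", "caption": "Marmon 515",
         "credit": "J. Smith", "back": "prev.html", "forward": "next.html"}]
pages = render_pages(rows)
```

Regenerating everything from the data is what makes reordering and error correction cheap: fix the row, rerun, and every affected page comes out right.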
Due to some people's belief that stealing bandwidth others pay for is a right, individual pictures are not off-site linkable. However, each picture has its own webpage, and that webpage can be linked off site. Many have complained about this policy, but it's the way it is ... if you want to grab a picture, upload it to a picture storage place, and link that into your blog or forum post, you can do so -- but that way, you won't steal bandwidth (which I pay for and the leechers don't) more than once from trolleybuses.net.
Several programs exist which allow photos to be resized in batches (a group at a time). I use one of those -- starting with the _m_ photo, I simply resize to 80 pixels by 60 pixels, targeting a 2K thumbnail. These programs allow the use of autogenerated names; I use a program called CKRename to put the names out of the batch resizer into the correct syntax for the webpage. Batch image resizers also have an interesting side effect. Nowadays, cameras and scanners embed metadata (date, camera model, time, etc.) into pictures -- sometimes as much as 15K worth. When you *wash* a picture through a resizer, that 15K of data goes down the drain.
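CKRename itself is a GUI tool; purely as a hypothetical sketch of the same renaming idea (the resizer's output pattern `thumbNNN.jpg` is assumed here, not known), the mapping back to the site's convention could look like this in Python:

```python
import re

def convention_name(auto_name, full_size_name):
    """Derive a thumbnail's convention name from its full-size (_m_) twin."""
    # If the name matches the resizer's assumed auto-generated pattern,
    # reuse the full-size name with the size field swapped: _m_ -> _tn_.
    if re.match(r"thumb\d+\.jpg$", auto_name):
        return full_size_name.replace("_m_", "_tn_", 1)
    return auto_name

new_name = convention_name("thumb001.jpg", "us_m_day_marmon_515.jpg")
```

The point is only that the thumbnail name is fully determined by the full-size name, so renaming can be mechanical.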
I've used Corel products since Corel 3; we're now on Corel 13. I don't necessarily recommend Corel for someone starting out, because it has a sort of goofy interface, particularly in PhotoPaint (I still really like CorelDraw). Trying to explain to others how to use PhotoPaint has proved difficult many times, so I don't recommend it anymore; nowadays, I suggest people use Photoshop, and buy a book. I'm happy *enough* with Corel, so far. I try to make the resulting display page 12-13" wide at 72dpi, with a disk size no larger than 100K, and with credit and website info pasted in.
Go to Planiglobe.com to generate a map. I generated a map of Canada, the US and part of Mexico, shrunk it to 950 pixels wide by 436 pixels high, and turned it into a JPG (950x436 shouldn't be too hateful on a 1024x768 display). I edited the JPG to name each city, put green and red dots on the map to distinguish active from closed city operations, and then, by trial and error, located those dots so that the HTML AREA command would work.
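Those AREA tags could themselves be generated from a small table of cities and dot positions. A hedged sketch in Python -- the city names, coordinates, and link targets below are invented for illustration:

```python
def area_tags(cities, radius=6):
    """Emit one circular <area> per city for an HTML client-side image map."""
    tags = []
    for name, x, y in cities:
        # coords for shape="circle" are center-x, center-y, radius.
        tags.append('<area shape="circle" coords="{},{},{}" '
                    'href="{}.html" alt="{}">'.format(x, y, radius, name, name))
    return "\n".join(tags)

# Hypothetical dot positions on the 950x436 map:
html = area_tags([("dayton", 512, 180), ("seattle", 60, 40)])
```

The trial-and-error part is still finding the x,y of each dot; once recorded, the tags regenerate mechanically.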
For the giant (in excess of 200 pictures) city pages, I use frames in order to be able to navigate. I did this so that one wouldn't have to wait for 200+ thumbnails (at 2K apiece) to load on the big pages. There hasn't been a hue and cry to change this (the frames are HTML 4 compliant), so I haven't. The page template for each individual picture works in a framed or non-framed page.
In early 2006, a commenter posed the question of whether the pages were compliant with anything. Prior to this, the pages had admittedly been haphazardly hacked together; while they worked in most web browsers, there was little rhyme or reason to them. That point drove the use of the validator at validator.w3.org. Compliance adds about 1K worth of info to each page, but it quiets the naysayers, and it makes the pictures more Googleable. I can't guarantee every last page is compliant, but most of the several thousand are.