Archive for the ‘web2.0’ Category

Web 2.0 – Hey, that’s my data!

August 2, 2007

I was looking at my feed from Boxxnet for Web 2.0 related items and I saw one called “Hey, that’s my data!” from Canadian Technology News. And like any good blogger, I stole what I could from the post, including, in this case, the title.

Before I even read the article, I had an idea from the title, which was that what we write online is “up for grabs” by anyone and their brother (or sister). So what if someone decides to write and publish a book using only material that other people have already written, without giving any credit (or money) to the people who actually wrote it? Or takes the best of flickr and makes a beautiful coffee table book from the pictures they find? Or there is an ongoing show on TV that is just a bunch of YouTube videos they have found on the internet. I can’t remember what it is called, but I just did a little looking on my digital cable and found a show on the Comedy Channel called Web Shows whose description is “A compilation of online videos”. When I went to ComedyCentral.com, I could look it up, but when I clicked on “go to site” it took me to a page with episodes they had on the web (I think). So I clicked on “Go to TV schedule” instead, and it took me to the schedule for that show, which described it as “This groundbreaking half-hour series features several of the internet’s best webisodes and short-form content.”

Anyway, I know I have watched shows on TV made up of videos that other people have made and posted on the web. Now I don’t have a problem at all with people sharing information that I have written or posted, or videos or pictures I’ve taken. That is the beauty of the whole Web 2.0 concept: that the whole is greater than its parts. But what control is there over people taking the creative, hard-worked things that others have made and using them just to make money?

Or what if someone wants to use something that you created in a way that you don’t agree with? What if, for example, you took a series of beautiful nude photographs and posted them on flickr as an art set, but someone copied them and put them in Hustler magazine as “Hot Chicks from the Web”?

Or, for something a little closer to home, what if someone usurped your website and redirected it to a site you found offensive? We had a website at one time that we no longer use, but since I was into koi ponds at one time and posted pictures and descriptions of our ponds, there were links to it in several places. However, a porno site redirected our links to its site and, even worse, it had a million popups and all sorts of things, so once you got there, you couldn’t get out or stop the madness. I tried every way possible to do something about it but had no luck. I couldn’t even edit the places where my link was posted, or in most cases, contact the person who could.

And back to the point of the post that originally sparked this thought: what control do you even have over anything relating to you on the internet? The original post was subtitled “Why we’re all on Facebook, whether we like it or not” and dealt with a situation even closer to home that I am sure we can all relate to. It is about how this person had been at a party on a cruise ship and found his picture (looking rather raggedy) on someone’s facebook page. Here is a quote: “This is what happens to data in an age of social networking. We don’t necessarily create the content, we don’t store the content, and we have little to no control over how it is managed, distributed or manipulated. At the moment, if all you knew about me was the stuff about me you found on Facebook you’d assume I was a haggard-looking ne’er do well who spent too much time boating and not enough time sleeping. Which might be true, but it’s not the entire truth.”

I highly recommend you read his post; he has much to say on this particular issue, and I don’t really need to re-state it here. I guarantee it will hit home and raise some interesting questions.

And as you can see, I am not above stealing a catchy title, or using what someone else has written. Are you?

~Susan Mellott


Library 2.0 – ACPL’s New Books Wall Mashup and More!

July 31, 2007

Sunrise Alley by Catherine Asaro

Just for fun, I created an old-fashioned card catalog card for the book I had downloaded from baen.com/library, using John Blyberg’s card catalog generator. You have to enter the data by hand (I copied it from Amazon.com) but it makes a really fun graphic. With some programming, you can make a mashup that uses this. A mashup is a website or application that combines content from more than one source into an integrated experience.
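To make the idea concrete, here is a minimal sketch in Python of what a mashup does under the hood. The catalog and Amazon records (and the ISBN) are made-up stand-ins just for illustration; a real mashup would fetch both sources over HTTP rather than from hard-coded dictionaries.

```python
# Two "sources" of book data, keyed by a (hypothetical) ISBN.
# Stand-ins for the library catalog and Amazon's book data.
catalog = {
    "0000000001": {"title": "Sunrise Alley", "author": "Catherine Asaro",
                   "copies": 3, "available": 2},
}

amazon = {
    "0000000001": {"cover_url": "http://example.com/sunrise.jpg",
                   "avg_review": 4.1},
}

def mashup(isbn):
    """Combine catalog holdings with review data into one record."""
    record = dict(catalog.get(isbn, {}))   # start from the catalog entry
    record.update(amazon.get(isbn, {}))    # layer on the Amazon fields
    return record

book = mashup("0000000001")
print(book["title"], "-", book["avg_review"], "stars,",
      book["available"], "of", book["copies"], "copies available")
```

The whole trick of a mashup is just that merge step: each source contributes the fields only it knows about, and the combined record drives the display.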

Sean Robinson (my husband and head of IT at the ACPL) created this book wall, called “Books we added to the catalog yesterday”, combining the new material checked in each day at the ACPL (Allen County Public Library) with data from Amazon. It shows pictures of the actual book covers, and if you click on a cover, it will show you an old-fashioned card catalog card for that book along with information on it from Amazon (if the book is brand new, it doesn’t necessarily have review info yet).

Then you can click on “Look this up in our catalog” to see the ACPL catalog information on that book, like how many copies there are, whether they are available and where they are located, and to do all sorts of neat things like adding it to your list or putting it on hold. You can also find more books by that author, more books on those topics, or browse nearby call numbers (books that would sit on the library shelf with this one).

Go check it out and play around with it. It is a great example of how you can combine Web 2.0 tools to create something new and exciting and useful.

For this and more innovative ways the Allen County Public Library uses Web 2.0, visit their Library 2.0 site: ACPLib2.0. ACPL Rules!

~Susan Mellott

Celebrity 2.0 – Wil Wheaton is Web 2.0

July 31, 2007

I imagine most of you know who Wil Wheaton is. He is an actor who played Wesley Crusher on Star Trek: The Next Generation. Actually, he has done a lot more than that, but that is mainly how I know of him.

But what makes him interesting is his love and knowledge of technology and his leading edge use of Web 2.0 tools. Here is the wikipedia entry that talks about him and what he has done.

From wikipedia: “After leaving Star Trek, Wheaton quit acting altogether. He moved to Topeka, Kansas to work as a programmer for Newtek, where he helped develop the Video Toaster 4000.” (I assume they meant he temporarily quit acting)

Wil was a very early adopter of blogging, creating his site wilwheaton.net (see the wikipedia article on his blog), which has been down for updating (since about last June) and is replaced for now by his blog WWdN: In Exile – Wil Wheaton’s not-so-temporary blog. Per the wikipedia article on his blog: “Rather than just a fan forum, it was a place where people could gather to talk about various subjects including movies, music, books, religion, politics, gaming, geocaching, and miscellaneous topics; the original emphasis was on topics of interest to Wil Wheaton and not the man himself.” He has entries on his blog dating back to July 2001.

Wil has also written three books, much of whose content consists of extended versions of his blog entries. (Take note, bloggers: this is not a bad idea if you have a following.)

Also from wikipedia: “In late September of 2006, Wheaton began hosting a Revision3 syndicated video podcast called InDigital along with Jessica Corbin and veteran host Hahn Choi.” Of note: Wil found an error in the wikipedia entry for himself and asked on slashdot for someone to correct it.

Wil also twitters regularly and has just recently twittered about the Comic-Con he attended. Interestingly, he is having a problem at the moment trying to remove people he no longer wishes to follow and is talking about it on twitter. Update: as of about 4 hours ago, he twittered that the problem was a bug in twitter and was fixed by Biz Stone.

Wil also uses flickr and has some very interesting photos. Something else I found interesting is what Wil has been doing on buzznet: “What is Wil looking At?”, which is sort of a cross between flickring and twittering (flittring?). It looks like he is taking pictures with his phone of whatever he is doing and uploading them. It’s a neat idea and I’m sure at some point, people will be doing that just like they twitter now.

And of course, he checks Technorati for links to his blog and has a Technorati profile, wilw.

And there are quite a few interesting videos of him talking about technology on YouTube. Here is one where Wil talks about Podcasting (answering a fan’s question at a reading of his book, Just a Geek).

And there is a lot more that he is or has been involved with. The wikipedia article and his blog have more information.

To be honest, although I knew who he was, I’m old enough that I watched the original Star Trek more than I watched The Next Generation. But I think he seems like an interesting person and certainly one who is Web 2.0.

~Susan Mellott

Web 2.0 – Code4Lib addresses Data Mgmt – When will You?

July 30, 2007

As you know, I’ve been concerned about the institutions that host data for Web 2.0 applications. Code4lib, a major library 2.0 site (along with everything else hosted on anvil.lisforge.net), was hacked on July 21 and is still not available. They are hoping to have everything back on Aug 1 – we’ll see.

And 6 back-to-back power outages hit the SOMA neighborhood of San Francisco last Tuesday afternoon causing major havoc with popular web services. 365 Main was down, along with craigslist, Technorati, Yelp, AdBrite and SixApart (including TypePad, LiveJournal and Vox). Many other popular sites such as CNet were unavailable too.

I wrote a couple of posts about these problems earlier and suggested that this is a greater issue – this one on the 365 Main outage and some thoughts, and this one on whether you trust online sites to protect your data, re: Code4lib.

Well, Code4lib is taking this seriously (as they certainly should) and is hosting a special discussion on August 1st to discuss this. Here is the announcement from their Planet Code4lib website (the only code4lib site currently available).

“You are invited to a special discussion in #code4lib on irc.freenode.net on 1 August 2007 at 1900 GMT about how to prevent this from happening again. We’re going to be talking about moving some of the web applications to institutions that are better set up to manage them.”

I am thrilled that code4lib is now thinking about this and I hope they can recover all their data in a timely manner. And I hope that other organizations that are heavily web-based will follow their lead, seriously look at who is hosting their data, and make sure they know what is in place to protect it.

In the Web 2.0 world, it isn’t just about content and collaboration and new ways to interact. Now that these Web 2.0 concepts are coming to fruition and are becoming valuable resources, it is time to look at making sure they are operating in a stable and protected environment.

~Susan Mellott

Web 2.0 – What does an Organization Really Need to Get There?

July 28, 2007

This was originally written to update my “About Me” page. But it turned into this. These are the posts that prompted this post – MLS and Library Technology, a post on Why require an MLS for library technologists, about a post on code4lib regarding an MLS degree for library technology postings (which unfortunately is currently unavailable since all code4lib.org sites are down). And here is an interesting post with an opposite perspective called I Didn’t Get an MLS to do That, and another about the MLS degree in general called The Embattled MLS in the Library Journal. Which raises another question: whether an IT degree should be a requirement for librarians. But that is a post for another day. Anyway…

I said I am a coder. But it is better to say I was a coder. I did love to code. But honestly, I’ve gotten less interested in it since I’ve retired. What I really love to do is to listen to what people want to do and then translate that into something that solves their problem and/or enhances their technology environment.

For as long as I worked, I was what was known as a Programmer/Analyst. That means that the majority of my time was spent conducting client interviews, learning their processes, creating client/IT teams to discuss the goals, and then doing a lot of analysis and design to get to where they want to go. The coding, although fun, is the easy part.

I had to take a concept that someone had and translate it into something functional that transcends their original thought and turns it into a working, creative, useful application. You might not realize what this involves. Most of the time, people don’t know exactly what they want; they just know they want it. This is actually the best scenario. It is harder when people think they know how to design what they want. There is a reason why there are specialized IT analysts/architects. We have spent a lot of time and have a lot of experience designing technology solutions.

Just as people are experts in their own field such as financial organizations or non-profits or libraries, so are IT analysts experts at translating what someone else does into a technology based solution. And just as I could not tell you the formulas for calculating statistical risks for life insurance, neither would a risk assessor know how to take what they do and make it user-friendly and technologically innovative.

I think one of the problems organizations are having with going Web 2.0 is that they don’t recognize that they need a person who can look at their processes and design a Web 2.0 solution. I’ve done that for many, many years, and I really find it surprising that organizations (such as libraries) that say they want to have an online presence and go Web 2.0 don’t even seem to realize the need for someone with those skills.

I worked with various functions in life insurance most of my IT life. And I have little to no background in life insurance. It is not my field. But it never needed to be, nor should it have been. There were ample experts in all facets of life insurance that could determine the formulas needed and the results expected, and could take me through the processes. My expertise was knowing how to listen to what people want, to learn how they currently do it and to design a technologically progressive solution that goes beyond what they envisioned and yet still satisfies everyone and is not intimidating. It’s really a very complex job.

I have to confess, I find it funny (sad) that the IT positions for libraries all seem to require an MLS (Master of Library Science) degree. That makes no sense to me. There are plenty of people with library skills and knowledge already in a library. What is lacking is someone who is able to look at the processes from an IT design perspective and pull all the areas and processes together into one creative, innovative and functional design.

I also hear the arguments that you can’t talk to a librarian or understand a librarian unless you have an MLS. How can that make sense? I’ve talked to actuaries and lawyers and accountants and life risk assessors and all sorts of people with their own expertise and language and ways of thinking. Why would a librarian or academic or anyone else be any different? I’m not stupid. I think I can grasp how most jobs and functions work and I think I can talk to most kinds of people and be understood and understand them. And I know how to create a team that includes expertise from all areas so that everyone contributes in ways only they, with their knowledge, can.

Next time you or your organization are thinking about hiring an IT person, think about what you are trying to accomplish and what needs you have that aren’t already covered internally. Then look for someone who can determine where you are, where you are going, and how to get there in a way that includes everyone and appreciates their expertise while contributing their own. Be understanding of each other and teach each other. Then sit back, let go of the reins and see how far you can go.

~Susan Mellott

Major Web 2.0 Sites Down from Power Outage – They need a lesson from Big Business.

July 26, 2007

Power Outage in SF Tuesday brought down major Web 2.0 sites.

6 back-to-back power outages hit the SOMA neighborhood of San Francisco Tuesday afternoon causing major havoc with popular web services. 365 Main is down, along with craigslist, Technorati, Yelp, AdBrite and SixApart (including TypePad, LiveJournal and Vox). Many other popular sites such as CNet were unavailable too.

Interestingly enough, a “source close to the company” (365 Main) had this to say:

“Someone came in sh*tfaced drunk, got angry, went berserk, and f**ked up a lot of stuff. There’s an outage on 40 or so racks at minimum.” ValleyWag had a good article on this with lots of interesting links.

This, however, was unlikely to be the cause, since the area had been having power outages and clearly their UPS system did not function properly.

The San Francisco website Laughing Squid has a write-up of the power outage.

Here is another informative post from Radar.OReilly.com

Six Apart, in a very 2.0 move, kept everyone updated via its twitter stream.

But the real question is, what happened to their power backups? They should be able to keep running regardless of any lack of power. This is a good post about what 365 had to say regarding its “Credibility Outage” (and basically they made a bunch of excuses).

So again, do you trust your Web 2.0 online providers? Clearly there is a gap between what “should” have happened and what actually did happen. “Datacenter 365 Main released a self-congratulatory announcement celebrating two years of continuous uptime for client RedEnvelope, mere hours before today’s drunken blackout.” [PR Newswire]

And without extensive testing and backout plans, it is hard to know exactly what would happen in an event like a server being hacked or a major power outage. I would be more interested in the disaster recovery planning and testing they (or any major player) had done than in what they theoretically think might happen, based on the things they think they have in place.

Coming from a big business background, where the only real commodity is data (in my case, insurance), I have seen and been involved in a huge amount of disaster recovery testing and planning. I remember what they, and other businesses, went through in testing for the 2000 rollover and for any number of other potential disasters. September 11th tested their and many others’ disaster recovery plans. The Stock Market and major banks and other financial firms simply cannot go down or lose data, for any reason.

But as we move to a Web 2.0 world, companies like 365 Main are now also the repositories of major amounts of data and for many Web 2.0 companies, their business is data, just like financial institutions. It’s not small potatoes anymore. Face it, they are big business now and need to act like a big business. I’m sure they are bringing in big business income. So who holds them accountable? I’m wondering if many of these Web 2.0 companies didn’t grow from such small beginnings that they may not even be aware of what they need to ask and know from their provider.

And unfortunately, I hear people with a business background being dismissed as “luddites” or “1.0” or “dinosaurs” or just not with it, supposedly not able to comprehend the new 2.0 world. It reminds me of when PCs first came out and I started programming them after having had a mainframe background for several years.

PC programming was wild and wooly. There were no standards, no one documented their code so maintaining it was a nightmare, and people would see how many functions they could put on one line of code (more being better in their mind). An “elegant” piece of code would be completely undecipherable by anyone (which seemed almost to be the point) and would have no documentation. Which meant, of course, that the code for most programs was a mess because no one could figure out what the last person did, so they hacked around it. But if you were from a mainframe background, you supposedly could not “understand” PCs and were basically a dinosaur. Well, I know that is a bunch of nonsense because I didn’t have any problem understanding PCs and PC coding. What I didn’t understand was why they allowed projects and programs to be so sloppy and poorly run and written.

It was a real case of 1.0 technology meets 2.0 technology. In this case, Mainframes vs. PCs. Now it is happening again with Web 2.0. And regardless of what the current “New Thing” ™ is, one thing they all have in common is the belief that they know more than the people who have used the ‘old’ technology. But what they don’t realize is that they really haven’t learned anything at all yet. They have a great direction and new ideas and concepts and great plans, but if for no other reason than that the technology has not been around that long, they don’t have practical experience and a background to build on. I’m sorry, but while college gives you an in and a piece of paper to say you are somebody, the real learning starts when you start applying the knowledge in real world situations.

I remember taking a LOMA (life office management) test on data processing and thinking it should be a piece of cake. It turned out to be one of the hardest of the set because I had to learn what they thought the right answers were, not what was actually correct. I had the same experiences in higher education where what was being taught was so outdated that it was really completely wrong and in my opinion, was harmful in many ways to learn, especially if you thought you knew something afterwards.

And this is where I think the 2.0 arrogance is showing. It is a wonderful new way of doing things, but there are many foundations they could and should build on that have already been figured out. They can take what has been done to new and exciting levels, but reinventing the wheel for every single thing is pointless and causes the new technology to be without wheels for a while.

~Susan Mellott

Web 2.0 – Do you trust online sites to protect your data?

July 25, 2007

Per the web site Planet Code4lib, all of the code4lib.org websites have been rendered unavailable. Here is what was said: “NOTICE: The other code4lib.org web sites, and everything else hosted on anvil.lisforge.net, are unavailable. The server was hacked on 21 July 2007 and will be restored in a week or so. Join #code4lib on irc.freenode.net if you need to know more.”

I found that out when I tried to follow a link on Technosophia about a post on Library Web Chic about a post on code4lib regarding an MLS degree for library technology postings. Since this is something I have some opinions on and am thinking about for a post, I was very interested in what others had to say. But when I tried to access code4lib.org, the site (and all related sites) were completely down.

Oddly enough, I had also tried to access the site earlier to see what Code4lib conferences were coming up, and could not, but did not realize what the problem was until I saw the announcement on Planet Code4lib.

I know what a wealth of Web 2.0 information and collaboration is on that site. This is an interesting test of what happens in a situation like this. Hopefully code4lib has good backups. But what guarantee is there that this site, or any other site, can restore the data people have entrusted to it?

Are we carefully considering where we put our faith and our important data? Do you know what the backup capabilities are of places where you have your online data? If you host the data yourself, obviously you are responsible for it. But what about all the sites that host data for other people? There are many of them and I’m sure we all use and put our faith in several of them each day.

What if the site that hosts all the Google Blogger blogs (blogger.com) crashed and the data could not be recovered? What impact would that have on everyone? I use Yahoo! mail and there have been a few times that it has been unavailable for several hours (even up to a day or so) and I was really, truly messed up. I had appointments with people and people I needed to contact and all their contact information and arrangements we had made were trapped in my Yahoo! account that I could not access. I was sweating bullets hoping it would come back up before I missed an appointment or something important that someone emailed me.

Granted, it is my responsibility to keep my important data, but how many people don’t think of that until it is too late? People are learning and exploring and using the new Web2.0 technology, but is it growing faster and more wildly than can be sustained? Do people even think about things like this? Should they?
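One practical way to take that responsibility is to keep your own copies of what you post. As a minimal sketch (the feed XML here is a made-up inline sample; in practice you would fetch your blog’s or webmail provider’s real export/feed URL on a regular schedule), a small script can read a site’s RSS feed and give you entries you can save locally:

```python
# Sketch: pull post titles and bodies out of an RSS feed so you can
# write them to disk as your own backup. The feed below is inline
# sample data standing in for a real feed fetched over HTTP.
import xml.etree.ElementTree as ET

FEED = """<rss><channel>
  <item><title>My first post</title><description>Hello.</description></item>
  <item><title>My data, my backup</title><description>Keep a copy!</description></item>
</channel></rss>"""

def backup_feed(xml_text):
    """Return a list of (title, body) pairs, ready to write to files."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("description"))
            for item in root.iter("item")]

posts = backup_feed(FEED)
for title, body in posts:
    print(title, "->", body)
```

Run something like this on a schedule and the day your host goes dark, you at least still have your own words.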

~Susan Mellott

Politics 2.0 – YouTube videos Address Energy Bill

July 25, 2007

There is a new channel on YouTube called CleanMyRide. This is what it has to say about itself: “This channel is aimed at making people aware that Congress is about to begin an important debate about the energy bill. The bill is a good start, but it still needs provisions to take on the really big stuff – increasing gas mileage requirements and mandating the availability of flexible-fuels. These tough solutions will slash oil use and slow global warming.”

One thing new about this is the high production quality and celebrity involvement. Some of the celebrities in these videos are Matt Damon, Ben Affleck, Jason Biggs and Jennifer Garner, to name a few.

Here is the first video:

The videos are funny, informative (slight adult content) and really rather addictive. Ben Affleck is hilarious in part one as a big piece of “street” corn. Here is an article from People Magazine about the videos.

Check out their website to see some really cutting edge Web 2.0 used for a campaign to inform people about an important bill before Congress.

Visit CleanMyRide.org to learn more and sign the petition. Tell Congress: Clean My Ride!


~Susie

Politics 2.0 and the Digital Divide

July 24, 2007

So politics and the presidential campaign are going 2.0. While I am certainly a strong proponent of this, it does raise the concern that this is slanted towards the technologically advanced and/or those who have the means and knowledge to use Web 2.0 technology, which potentially excludes large segments of the population. Many people who were not raised in the era of computers and PCs do not understand even what is available, much less how to use it. This would seem to lean heavily towards, and garner, a younger audience. And those who are older who do know the technology are probably those who work in technology and/or have had access to and knowledge of all the new Web 2.0 technology. Therefore, this would encompass a primarily white-collar, upper-class population and exclude those who have not had the means or did not work with technology.

I think this is one area where our school system and our libraries play a huge role. Our schools need to provide training and funding for every student to learn and be able to apply technology. And our libraries especially, can educate and enable everyone, regardless of age, ability or economic status. I think this is a direction that libraries need to go and I think they need to get the funding to do it. I don’t know that I think the libraries are where the sole responsibility for this lies, nor do I even know if they are necessarily the places that should take this responsibility ultimately. But I do know that if the Public Libraries don’t do it, there will be a large portion of the population that will be left behind.

I cannot think of a public organization or facility that could come anywhere near the ability that libraries have to reach and educate the public and to provide access for all people. I know what a difference it has made to have public computers in the libraries. When I see someone who probably isn’t sure where they will be sleeping that night come in, sit down at a computer, and be the equal of anyone else, I am proud of what our libraries can give. This is something I think we all need to encourage, promote and consider when funding is needed for our public libraries.

I find it interesting that of all the public institutions we have created, libraries are really the only one I can think of with the capacity to serve the entire public in so very many ways, regardless of age, means, ability or any other differentiating quality.

And the only problem that someone might run into with using a library is difficulty getting to the nearest branch. So I think it is very important for libraries to keep their small neighborhood branches, including (especially) those in poorer areas, since they can serve a population that perhaps can’t easily get farther than they can walk. I do worry that the tendency may be to improve the branches in the richer areas and neglect the ones in the poorer areas, especially since the richer branches may be more used. But the poorer ones may be more valuable. Actually, I remember when the bookmobile used to come down our street. They are no longer running and I think that is a mistake. But this is fuel for another post 🙂

Anyway, along the digital divide lines, here is a post from the PBS.org teachers blog where, after a June debate, the political candidates were asked about this. Here is a quote from that post: “After the event, I had a chance to speak with four of the candidates about their perceptions about the digital divide and the role schools might play in bridging it. The lesson learned: it’s hard to get more than a sound bite when the candidates are in spin mode.” And here is a link to this very interesting post.

~Susie

Library x.0 or Who will Preserve the Data?

July 19, 2007

A long, long time ago, when Compuserve was one of the major players in internet connectivity, like AOL (it dominated the field in the 1980s), back before it had a GUI and was still all line-based, I belonged to a group called Church of the Bunny. This was in the early ’80s and it was definitely bleeding edge for the times. I remember 300 baud modems that you put the handset of the phone into to use. I was lucky enough to be working with PCs, so I had access to some things that were unaffordable or inaccessible to a lot of people outside the ‘geek’ fringe.

Anyway, Church of the Bunny was a community, and we talked and laughed and had our inside jokes, and it was an important part of my life for several years. We used files to store and pass around our Church of the Bunny manifestos and credos and whatnots, and we had a newsletter that was published and mailed to the members. We would meet up when we were out each other’s way. I still have the old newsletters, and thank goodness, because they are some of the only existing pieces of the Church of the Bunny I can find. Since this originally started pre-web and on Compuserve, the files pretty much went away, the few websites that were created are no longer in existence, and it is all gone. In my searches I found this little blurb about The Holy War between the Church of the Gerbil and the Church of the Bunny. That’s about it. There are still several references, but all the links are broken or changed.

We had a community built up and a whole “world” so to speak with a very detailed society and many, many writings and files and correspondence and articles and it is all gone. Vanished into thin air. People left Compuserve so all that archival information was gone and people created web sites on various servers and hosts that folded, or moved or just disappeared. And that was a minuscule portion of what has been lost that was on Compuserve. And there were many other hosts that folded or people moved away from, like Tripod or Geocities or any number of others.

So here’s a thought to ponder as we move into Web 2.0 and the online collaboration and social networking tools. How can we preserve all the collaboration and information and social networks as the various platforms evolve and change and come and go? I am sure we can all think of a tool we have used on-line that has been replaced by something newer and more popular. How can a migration of data be accomplished or at least, who is able to catalog and store this data?

If Web 2.0 is a new way of writing and spreading information, what role does Library 2.0 play in keeping that data intact and able to be accessed by other people? Just like libraries are the archives for books and have played a major part throughout history in preserving mankind’s writing and knowledge, what is the equivalent in the 2.0 world?

And what risk do we incur by going electronic and putting information on an electronic medium without a methodology in place to catalog and store it?

Just some thoughts on a rainy day.

~Susie