Apologies for the hiatus

November 10, 2011

Hi everyone. Just wanted to check in to say that I’m still alive and still planning to keep this blog going, but I’m in something of a transitional period right now and can’t give it the attention I’d like. In the meantime, check out this article (which I’m posting partly so I myself can remember it):

Digital Diary: A Kindle Disconnect?

I have lots more thoughts I want to record here before too long, but first I have to get through this post-graduation phase. So until then! (Possibly with some appearances in the meantime.)


My contribution to the American Library Association’s Washington Office blog

August 9, 2011

Check out the ALA District Dispatch!

Libraries can help users who want to connect — and those who want to disconnect

Please contribute thoughts there if you have any. I’d love to hear your opinions, whether you agree or not!


Sonic the Hedgehog vs. Dostoevsky?

July 18, 2011

I just finished reading a fantastic article: Christine Rosen’s People of the Screen, published in The New Atlantis (“A Journal of Technology and Society”) in 2008. There’s lots I could say about it, but I’m going to focus on the argument that Rosen refutes about video games. Jay Parini, a writer who teaches English at Middlebury College, is quoted in the article as follows:

I wouldn’t be surprised if, in ten or twenty years, video games are creating fictional universes which are every bit as complex as the world of fiction of Dickens or Dostoevsky.

For a thorough debunking of the “video games are superior to novels” argument, I need only refer you back to the article, which has my complete endorsement. But for now, I’ll play with the idea that maybe video games do have something to offer in the sense that Parini suggests, even though I find the hubris with which their superiority is suggested patently absurd.

Another caveat: my video-game-playing experience is not exactly typical of my generation, nor of the slightly younger one now glued to their Xboxes, but I do like video games.

Here’s how I played them. I started off playing computer games on my family’s Amiga (an amazing personal computer that was ahead of its time), but I wanted a console system, too. My parents wouldn’t buy it for me, so I saved up and bought one myself. It was such a momentous occasion that I can still remember the date: June 19, 1994. That was the day I emptied out my bank and headed to Toys R Us to buy my Sega Genesis.

The rest of that summer, I spent two hours first thing every morning playing all the way through Sonic the Hedgehog 2, up to the very last boss, a giant Robotnik (the game’s villain), whom I couldn’t quite beat until a few years later. But I kept playing. I got to where I could beat the first of the Special Stages with my eyes closed. Pretty impressive, or pretty scary, depending on your values.

What I meant by my earlier caveat is this: most kids don’t feel the need to play the same video game over and over and over.1 But I did. And I liked doing so.

I have thought a bit about this over the years. I favored Sonic over Mario, Nintendo’s plumber mascot, because of the faster pace of the games. Zipping through those loop-de-loops was exhilarating, even if I’d done the same thing 938 times before. I liked the colorful, diverse settings of the different zones through which you raced, and the catchy soundtracks. So maybe this is evidence of the idea that we want things to be faster-paced and more stimulating.

But probing a little deeper, I think that exactly the opposite was true for me. I think that, eventually, I was playing games I had memorized because it was a way to zone out: a way to reach the very still space that connected, engaging, electronically enhanced books (and the Dickensian video games that are supposed to replace them) are meant to help us avoid. I think of friends who have taken their knitting needles to college classes or even to staff meetings, and I wonder if the same thing is going on for them. Perhaps a bare minimum of mental activity satisfies the basic demand for stimulation, leaving higher cognitive processes free to flourish. (I would like to note that this would not work for someone just learning to knit, or for someone playing a video game they haven’t memorized and actually want to beat. One of my knitting friends had special permission to knit in class while others had no such license, because the professor knew my friend paid attention while knitting.)

I also noted that my most prolific creative years thus far (not that I’m old or anything) were the same years that I spent hours in the morning plugged in to my Sega Genesis. This is likely because in middle and high school I had more free time (combined with by-then-adequately-developed writing skills) than I have had since. But I think it might also have something to do with the fact — and I remember sitting there thinking these things — that playing Sonic games for the gazillionth time gave me just the right level of stimulus/distraction, without any further cognitive demands (except maybe for the occasional boss fight), to let me otherwise ruminate on storylines and other creative projects.

In short: my Sega Genesis was, for me, a contemplation space.

Like I said, I think I’m a little bit of a freak this way, and maybe this is why I’m the one out there writing a policy proposal about creating more contemplation spaces. I haven’t played many video games since the 32-bit console generation passed because they don’t suit that purpose anymore. (Ugh, get that narration out of my Sonic games. Sonic isn’t supposed to talk; he’s just supposed to run around and collect rings!)

But let’s consider now the idea of video games that follow in the footsteps of Dickens and Dostoevsky, as Jay Parini has suggested, and as James Paul Gee of the Games, Learning, and Society group at the University of Wisconsin–Madison also praised in the article with which I began.

Could a story as deep as, say, Dostoevsky’s The Brothers Karamazov, but with user-driven unfolding complexity, grab the attention of a bibliophobic generation raised on video games?

I obviously haven’t done a study, but I’m inclined to be pessimistic. I’ve played a total of two RPGs (role playing games) in my day. They weren’t really my thing, but I understand the appeal they have for my friends who were active fans of the RPG genre.

But when it comes to really deep thoughts, is a video game player — someone in the mindset of actively trying to advance a plot — ready to sit patiently and read/listen to Ivan Karamazov opining on the existence of God in a world of suffering?

So here I come to two divergent trains of thought that you’ve no doubt heard before:

1. Maybe Ivan rambles on way too long and he loses too many people, so if he were stuck in a video game with only a few screens to make his point, maybe he’d have to be a bit more pithy and would reach more people.

Or:

2. Even if some might argue that it could have used a tighter editor, you can’t get what is worth getting out of The Brothers Karamazov without the patience to listen fully to Ivan and Alyosha’s dialogue.

I don’t know; TBK is ultimately a courtroom drama, and I’ve heard that those can make popular video games.

And maybe I would play a video game where I had to sit there for screens and screens of Vanya ranting, but we’ve already established that I’m not a typical video game player.

So, I would wager that we’re not going to see Dickens and Dostoevsky coming soon to a PlayStation near you. Making them work there would take essentially the same effort as encouraging kids to pick up those authors’ works in the first place. Kids would only be interested for those few moments when what you’re offering still looks, on its surface, like the video games they like; you’re not going to fool them for long. It’s the same reason people who read Harry Potter don’t necessarily read the classics.

It’s patience and space for pondering that we’re losing, and packaging them into a hip and trendy medium isn’t going to save them. So hey, go ahead and make the great American game-novel; I’d try it out, and someone would probably play it. But it’s a digression from the real question at the heart of the war between books and video games.

Screen time vs. page time has more to do with the types of thought involved. Page time, as we’ve traditionally seen it, is usually more conducive to deep thought, though as I have demonstrated, in weird, twisted ways, video games can help with deep thought. I’m just not convinced that this use of the medium is obvious to or even seen as desirable by most players.

My main thesis here is that video games are fine and it’s great if someone wants to try to write classic literature using them. But what we take “video games” to represent, and what makes old-school education advocates disdain them, is that they don’t promote contemplative thought. A video game that tried to would be great, but it would face the same uphill battle that books face, and anyone who thinks Dickens-as-RPG will address this is missing the bigger problem.

And this is not my main thesis, but I also want to add: I firmly believe that playing Sonic did help me as a writer to some extent. But on balance, I do wish that, in high school, I had played a little less Sonic and read a little more Dostoevsky. I think my stories now would be richer and deeper. As always, it comes down to a question of getting the balance right.

  1. (I did occasionally play other games. Mostly they were Sonic 1, Sonic 3, Sonic and Knuckles, and Sonic Spinball, but occasionally I played spinoffs of Disney movies and the Animaniacs cartoon show. I also loved all the Sim games, Lemmings, and Where in the World is Carmen Sandiego? But I always put those games in a separate mental category from what I referred to as “playing Sonic,” a category which did include the Disney games.)

Attention Curation?

July 15, 2011

I have a question for all of you.

How do you decide what information to consume? More specifically, let’s focus for now on news. How do you decide which news stories to read?

Do you read what flows by on your Google Reader or what your friends post to your Facebook news feed? Do you overhear people talking (or tweeting) and google things yourself? Do you ever read newspapers or magazines or listen to news on the radio or watch it on TV? Is that news The Daily Show?

How do you decide how to allocate your attention? And, moreover, is it more active on your part (seeking out details about certain stories of interest), or more passive (a feeling that the news that’s important is making it to you without much effort on your part)?

I’d love to get your thoughts uninfluenced by my own, but since it’s hard enough to capture a bit of your attention, I’ll just share mine right now.

I used to get most of my news from the Internet. While common perception sees on-line news as the novelty, for me it’s the old way of doing things, and I’ve discovered something that feels to me like a brilliant innovation: television news.

Specifically, the PBS NewsHour. I love it. Not only does it go into considerable depth, but it does the hard work for me. I make an effort to tune in and pay attention, but I don’t have to be making active (but less informed, and therefore strenuous) choices about what I should be reading. I feel like I’m being briefed by someone who knows my time is valuable. Very cool.

It has the bonus of not putting me in front of the same screen to which I’m shackled for most of the rest of my life. (No, no, as I keep saying, I love the computer and I love the Internet. I love that I can stay in touch with friends through Facebook and IM, but I wish I could see more of them in person. But I digress. As always, my issues here stem from a need to restore balance.)

I also like PBS because there are no commercials. It’s not that I mind commercials per se; it’s just that the commercials on NBC/ABC/CBS remind me that I am not the target audience. (Then again, it’s no worse for me than watching hockey games….) Just because I’m sitting in front of a TV consuming news doesn’t mean I want to ask my doctor about Cialis.

I feel like I stumbled across the Next Big Thing, only to realize that the world has already left it behind. But it really works for me! I love it!

This goes into the idea of attention curation. Eli Pariser alludes to this concept in his book, The Filter Bubble. This is about how Google and Facebook figure out what content we’re likely to like (or find useful, in the case of Google specifically; I note this to make it sound slightly less sinister). They want to give us content we like (or find useful) because it makes sense economically. Pariser notes in his book, however, that this has the effect of exposing us to fewer perspectives, since we generally like things with which we agree, and that support our preexisting ideas. That, however, isn’t always healthy for democracy. I don’t think Google or Facebook are being actively evil, but I do agree with Pariser that this could have a detrimental effect on us.
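For the technically inclined, here’s a toy sketch of that feedback loop (my own illustration in Python, and emphatically not how Google’s or Facebook’s real systems work): rank stories by how often the reader has clicked on their topic before, let the reader click the top story, and watch the first click take over the feed.

    # A toy filter-bubble simulation (illustrative only; no real
    # recommender is this simple). Stories are ranked by how often the
    # reader has clicked their topic before, and the reader always
    # clicks the top story, so early clicks come to dominate the feed.
    import random
    from collections import Counter

    TOPICS = ["politics", "science", "sports", "arts", "world"]

    def personalized_feed(click_history, n=3):
        """Rank topics by past engagement, breaking ties randomly."""
        clicks = Counter(click_history)
        return sorted(TOPICS, key=lambda t: (-clicks[t], random.random()))[:n]

    random.seed(0)
    history = ["politics"]  # a single initial click seeds the loop
    for day in range(1, 6):
        feed = personalized_feed(history)
        history.append(feed[0])  # the reader clicks the top-ranked story
        print(f"Day {day}: feed = {feed}")

Nobody in that loop decided to narrow the feed; the narrowing falls out of optimizing for what the reader already clicks, which is exactly Pariser’s worry.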

So another reason I like PBS is that I trust them to choose the stories that I need to know, and to bring in a wide range of perspectives. (Whether that trust is misplaced and how it might be earned by other media are other good questions, but I won’t go into those now.)

But most of all, there’s just so much information out there that there’s no way I can sort through it all myself. Thanks to one hour a day of PBS’s attention curation service, I feel like I’m staying on top of the things that matter most. I still use the New York Times’ recommendation service (and I’m a paid digital subscriber), follow my friends’ Facebook posts, and search Google for more information about the stories I want in more depth.

But in today’s day and age, as Google and Facebook know, attention curation is an important service.

So, who’s your attention curator and why?


Contemplation space and paper books

June 29, 2011

I just wanted to share an observation:

I spent the past two hours in the ALA’s library room, with my printed articles and highlighters and Sherry Turkle’s Alone Together. (I highlighted pretty much the entire chapter on our “tethered self.”) While I read the articles and book, my brain was churning in a way that felt really great — really effective. I had a notepad there — a paper one, of course — and I wrote down insights and thoughts. I even had a great idea for a short story (when was the last time that happened to this would-be writer!) and wrote it all down, by hand. I had lots of thoughts and ideas and was connecting and synthesizing information, but I didn’t feel overwhelmed; I felt eager to keep going and take the next steps.

Then I came back to my computer and it all vanished. Should I e-mail people first, or find another good article first, or whoops I should make a list of stuff I have to do when I get home that’s important before I forget, and oh, wait, what was I doing again? Do I really need all fifteen tabs open in this browser? Which can I close? AHHHHHHHH.

Oh, and I started a blog entry. Whoops.

This is helping me get a grip and get my focus back, of course, and it’s also topical. But it’s true that this is a distraction machine.

Part of it, of course, is my own fault. I can’t blame the computer for the fact that taking the next step — weaving coherence out of last hour’s potential brilliance — is hard. Still, all these distractions aren’t making it any easier.

Okay, so, next: read Cognitive Control in Media Multitaskers from the Proceedings of the National Academy of Sciences. Yeah. I should probably read that.

Also, on Nicholas Carr’s blog (he wrote The Shallows: What the Internet Is Doing to Our Brains), I discovered a study suggesting that students learn better from paper texts. Check it out. I’ll be reading that shortly, too. But hey, it’s like I was saying!

Maybe I’ll read those back in the library, after I write an outline and some important e-mails.


See, that’s why I liked WordPerfect 5.0.

June 28, 2011

I’ve had a web page (as I always defensively remind people who mistake me for a Luddite) since 1996 and the age of 13. Back then, we had recently gotten a new Microsoft Windows machine, which was the latest cool new gadget, so of course I switched to that from our old Amiga 2000.1 One thing I quickly grew to miss, however, was WordPerfect 5.0. We installed it on the DOS side of our new Windows machine and I used that to write my stories, but everyone else was rapidly switching to Microsoft Word.

I hated Word. I really hated it. The 2007 model has finally earned a few words of praise from me, but during the 1990s and most of the 2000s it was my Most Hated Software Ever, even worse than the evil monopolizing Internet Explorer. Most of this was because it kept trying to think for me (like many others, I enjoyed that little Shockwave parody that let you kill the Paper Clip), but it was also because the screen was just full of distractions. Even if I didn’t have Netscape open in another window (which I pretty much always did once we got broadband in 1997), the screen itself was just too distracting.

I don’t want to format while I write. I just want to write. I just want a blank screen. WordPerfect gave me a nice, comforting expanse of deep blue, ready for me to fill with lines and lines of little grey words, with only one line of text at the bottom of the screen that let me know where on the page I was and what file I was working with. That’s all.

I still miss that.

So it’s not because I’m a Luddite that I hearken back to the days of WordPerfect. Even when I was a teenager (already beginning to get sucked into the growing range of digital distractions, but before I even consciously realized it) I valued this.

So I’m thrilled to have read in a New York Times article, Taming Your Digital Distractions, that someone has already come up with a word processor that tries to get back to the pure blank page. This was written in 2009; I wonder what’s out there now. I’m going to have to see what I can get for my Linux machine.

But now I’d best get back to the Microsoft Word 2007 window I have open, in which I’m trying to write about managing digital distraction.

Step 1: Close the blog window.

  1. Note that the Amiga 2000 is still set up in our basement, because it is awesome. The Windows 95 machine has long since gone to computer heaven. This concludes the obligatory Amiga cult plug.

Go take a walk

June 21, 2011

I spent the past few hours collecting all the information I could find about information glut, starting with reviews of William Powers’s Hamlet’s BlackBerry (which I highly recommend) and branching out from there.

My conclusion: there is too much information about too much information.

I got overwhelmed and couldn’t take it in anymore. So you know what I did? I went for a walk. I went to the grocery store and hardware store down the block from work (and a couple blocks from my apartment), got some stuff I needed, took it home, and came back. After this hour break (my lunch break), I am ready to go again.

This shouldn’t be news to anyone, but it might be anyway. Go take a walk if you can’t take in anything anymore! Breaks and physical exercise are both very good not just for your body but for your cognitive processes.

And now I’m off to work on my Google-sponsored paper. I also have a slinky that says “Google” on it sitting on my desk, which works well for stepping back from the churning seas of information for a moment. Thanks, Google: way to not be evil! Now I just have to figure out how to save the world. “Give me $50K to save the world,” as my supervisor noted today, doesn’t go over well when asking for grants, so it appears I need to come up with a detailed plan.

Do you have any ideas? What can libraries, policymakers, educators, or anyone else you can think of do to help society manage the information glut and make space to process the information we consume? I’ll put your name in my paper if you spark ideas! I have some ideas of my own, but they’re not ready yet. So I’m off to refine them now.


Researching Contemplation

June 14, 2011

I’m blogging to you today from the American Library Association Office of Information Technology Policy in Washington, DC, where I have been finding, downloading, and reading articles related to the issue of “contemplation,” or space to think away from connections and distractions: why we should promote it (the easy question) and how to do so (the harder question).

Even though I have a perfectly good computer here to read them on, I printed the articles out. I did this for two reasons: first, because I wanted to interact with the articles — mark them up with my multi-colored highlighter set and write comments and stars and emoticons in the margins; second and more importantly, because if I were reading them on the computer, I’d be distracted. So I took the articles, my notebook, and my highlighter into the office’s little library room and sat there and read them.

This worked great (as I learned in grad school), but it wasn’t perfect, and the imperfection rested in me and my brain. “Oh wow, this is so interesting,” I thought. “I want to look up this person right now. I want to find this other article he mentioned. I want to know what the weather is going to be like tomorrow. Oh look, it’s talking about how middle-aged people think they’re already having senior moments; I want to e-mail that to my mom, who is afraid she’s got early-onset Alzheimer’s, but I’m like that too at age 28, and so is this author! Oh gee, I just thought of a better way to think about something I was worrying about; I should make a note of that. Wait, I’m not focusing!”

That was my thought process as I sat there and tried to focus on the effects of digital technologies on our abilities to focus.

I decided to take a corner of my notepad (intended at first for thoughts directly related to my research) and jot down everything irrelevant that came to mind so it would feel sufficiently “dealt with” for now, and then tried to go back to my reading. But then my eye caught a print edition of the Washington Post. “Oh look, look at the headline; that looks interesting. Ugh, I’m so distracted right now; this would be a perfect entry in my blog. I should write about this. And hey, I wonder what the weather is going to be like tomorrow.”

I clearly have a personal interest in this topic.

I’m too young to be having “senior moments.” What I have is a Millennial brain, and I don’t think this is limited only to members of the Millennial generation.

(P.S. Stop calling us Gen-Y, researchers! That is so incredibly uninspired.)


Google Policy Fellowship!

May 17, 2011

Good news, everyone! Lots of good news!

First, I am now officially a Master of Information. I received my degree from the University of Michigan School of Information on April 29, 2011.

Second, I will be the 2011 Google Policy Fellow at the American Library Association Office of Information Technology Policy (ALA OITP) in Washington, DC this summer. This is really exciting because it will give me a chance to explore (among other information policy issues!) a lot of the topics I’ve pondered in this blog. Combined with the lack of homework from this point forward (YAY!), I hope that means this blog will get a lot more attention over the next 12 weeks. Check out the press release in American Libraries Magazine.

If anyone happens to find this post and is interested in knowing more about the Google Policy Fellowship program, I’d be happy to talk about it!

(Oh, and I’m also looking for a permanent position as a 2011 Presidential Management Fellowship finalist, if anyone happens to be looking for a PMF with interests in these areas! Google, work your magic for me — again! ;)


Crisis Info / Info Crisis

March 13, 2011

“OH, NO,” I shouted when I heard on NPR Friday morning about the magnitude-8.9 earthquake that hit Japan.

Less than an hour later, still feeling shaky, I was blatantly violating my rule against computers in class. We had a guest speaker that day, and I think it’s pretty rude to be staring at your computer screen when a guest has gone to the trouble to come to your class.1 Nevertheless, I could not tear my eyes away from my laptop screen.

It was a crisis and I wanted information about it. But with all the information pouring in—both about Japan and about all the things, all day, that other people kept expecting me to be focusing on that were not Japan—I was also in an information crisis.

Let’s flash back to when I heard the news on NPR. At first, as they hadn’t given a location, I immediately assumed that it was the Great Tokai Earthquake that they’ve been (and are still) waiting for, and that was always in the back of my mind when I lived in Aichi Prefecture, in Tokai.

I considered skipping class. I know too many people in Tokai and I would have been too anxious and upset to learn anything. My Japanese home prefecture of Shiga is also near Tokai and would have been badly affected. I thought back to the yearly earthquake drills that we had at my base school, where we would file out onto the athletic fields and then watch the kids stand in the hot sun while kyoto-sensei (that is, the vice principal) stood in front of us and gave us a long lecture on plate tectonics. After this, the kids and teachers all liked to joke (in that morbid way people joke about something because the reality is actually quite frightening) that since our school building was built almost a hundred years ago—before Japan became the world leader in amazing swaying earthquake-resistant buildings—it was going to collapse right on top of them when the Big One hit.

So, I was close to tears thinking that this might have happened; NPR still wasn’t giving place names, so I ran for my computer and pulled up the New York Times, where I found out that it was in Miyagi Prefecture, in northeast Japan and quite far from where I have lived.

Oh. Okay. Deep breath. The high schools at which I worked are still standing. Nagoya wasn’t leveled. The crisis had receded a level of distance from me. This started to sink in. That was when I put my computer in my bag and proceeded with preparations for class.

But my circle of empathy was still in flux. Yes, the high schools in my Japanese hometown are still standing, but now that I’ve pictured them falling down, it’s too easy to transfer that image to strangers in Miyagi. Just because I don’t know those people doesn’t mean their suffering doesn’t bother me.2 So I was still quite distracted. Some commentators were saying that we are simultaneously horrified and fascinated, and that’s why we can’t tear ourselves away from the news—especially pictures and videos. That wasn’t the case for me this time. It has been plenty of other times, but this time it was more an urgent need to know what was going on, because in my mind I have some kind of connection to this place.

Even when I didn’t really need to know. I mean, it wasn’t as if knowing anything was going to help anyone. But I still felt the compulsive need for information. I don’t know if other people would have the same sort of residual emotion, but I felt alarmed enough in that more inner circle of empathy, now enlarged to make room for the Tohoku and Kanto regions along with Kansai and Tokai, that I had a strong compulsion to get all the information I could, even after I had confirmed that most people I know in Japan were okay (you could feel the quake where I lived, but it wouldn’t have been strong enough to cause alarm until you turned on the news) and found information that minimized my worry about the rest.

So being in class still did not work well. My plan to compromise by attending lecture but keeping an eye on my laptop was thwarted when we were told to break up into groups—and not with the people sitting near us. I left my laptop at my original seat, thinking that in 5-10 minutes we could move back; an hour later, we were still sitting with our groups.

The separation from my computer was causing me physical distress. This only became worse when our professor decided to check on the news herself while we discussed the assignment; her computer was still connected to the projector announcing the tsunami to the entire class.

ARGHHHHHHHHHHH GIVE ME MY LAPTOP! I NEED TO STEER THE COMPUTER! I NEED INFORMAAAAAAATION!!!

Excuse the TMI nature (haha) of this detail: I also really needed a bathroom break at this point, but if she had given us one, I would have gone to my computer first. Not kidding. The physical stress from the tension of not being allowed to tap into the information stream that my Millennial brain has come to expect in a crisis was more pressing than having to pee.

This is definitely not a situation in which I can press myself or others to stick to an information diet. When you are in a crisis—even when it doesn’t directly, physically have an impact on you; even when it’s just some connection in your mind insisting that this is within your inner circles of empathy—it is far too much to ask that someone be rational about their information consumption.3 And in our constantly connected culture, since the information is there, we are going to go for it.

So I gave myself a pass, though I felt really rude. With my light skin and hair, our presenter would have had no way of knowing that I had any reason to be personally concerned about the tragedy everyone was talking about. I want to write to him to apologize for being so rude. (He would have seen me as quite uninterested in his talk; on the contrary, it was quite interesting, and I definitely wouldn’t have been on Facebook on a normal day!)

But I wonder what we can do to help ourselves handle the stream of information that we receive in an emergency, whether one that directly affects us or one that just feels like it does. In the former, of course, it matters more, and we are more likely to know precisely which bits of information we need. We can narrow our focus to find, say, a site that lists names of people who have been found safe in Sendai and Natori and Tokyo, or a phone number for an emergency service or a doctor’s name or the like.

But what about when we have a constant stream of news without any goal directing our actions? Like when we Americans (and people around the world) all sat glued to the TV for days after 9/11, watching the constant coverage even when there was nothing new to report? And what about when we need to do other things? People were very, very kind to me this time; whenever I said, “oh, I lived for three years in Japan before starting this program,” they asked me if I was okay and seemed to let me out of all obligations and expectations, though I still tried to pay attention to the meetings I had to attend. But I couldn’t. I don’t remember much of what happened in those meetings.

It would have been better if I had skipped those and just gone and watched NHK’s video. I had a visceral need to absorb the videos and blog entries and Facebook status updates from my friends in Japan. I needed crisis information; not being able to get to it, I found myself in an information crisis. I don’t want to go back to the time when we got only little bits of information about what was happening to our family and friends far away, but I do want to figure out how better to live in an age where we get it constantly and can indulge our desire for information the moment we have a chance to stop and think. We’ll probably never manage this entirely, just as Japan’s amazing building technology couldn’t eliminate all casualties, but we can try to do better.

I’m writing this down now not because I have any answers to these questions, but because I want to record this for the future so I can look back on it when I need to. (You’ll note, perhaps, that I’m posting this two and a half days after I first found out about the quake. I haven’t had a chance to stop and reflect on it until now!)

  1. Which is not to say it’s not also rude to our everyday professors.
  2. This is a digression from my main point of analyzing my information use in a crisis mindset, but I also am interested in circles of empathy, so let’s go on about that for a moment. The suffering after the quakes in Haiti and Chile and New Zealand would have bothered me considerably too, had I thought about the subject this much. But because I had no direct connection with those places, I was able to filter it out somehow as “happening to other people, far away.” That’s a normal emotional defense mechanism and it’s what keeps all of us from being in tears all the time. But I had lived in Japan. But then, I hadn’t lived in Miyagi-ken. But now they were within my circle of empathy because I’d so vividly imagined their plight. And now that I’d stopped to think about it, so were all the people in those other countries. Just the previous week I had responded to the American Library Association’s request to help fund the rebuilding of a library in Haiti. A lot of you are in library-related fields, so hey, maybe you’d consider helping them?

    Anyway, when I saw my friend Chiharu yesterday, as I always do on Fridays, she said she was amazed at how generous Americans were. I was truly touched by this and would like to live up to her high estimation of us. I must also note that Japanese people were eager to press 1000-yen notes into my hands when I worked at the U.S. Pavilion at Aichi Expo while Hurricane Katrina was bearing down on our coast. So they would do the same for us. But if you want to help Japan or any of these other places where people are suffering, check out the American Red Cross’s donation page.

  3. I’m certainly not saying we should dismiss all rationality: cool-headed thinking and the like are obviously of the utmost importance in crises within a certain level of proximity. But maturity alone can guide us on when/how we should give ourselves breaks and when/how we should be extra-strict with ourselves.