Category Archives: amazon

harpercollins takes on online book browsing

In general, people in the US do not seem to be reading many books, with one study citing that 80% of US families did not buy or read a book last year. People are finding their information in other ways. It is therefore not surprising that HarperCollins announced its “Browse Inside” feature, which allows people to view selected pages from books by ten leading authors, including Michael Crichton and C.S. Lewis. They compare this feature with “Google Book Search” and Amazon’s “Search Inside.”
The feature is much closer to “Search Inside” than to “Google Book Search.” Amazon.com, however, has a nice feature, “Surprise Me,” which comes closer to replicating the experience of flipping randomly to a page of a book off the shelf. Of course, “Google Book Search” actually lets you search the book, and comes closest to giving people the experience of browsing through books in a physical store.
In the end, HarperCollins’ feature is more like a movie trailer: readers get to view a predetermined selection of pages. This is nothing like the experience of randomly opening a book, or going to the index to make sure the book covers the exact information you need. The press release from HarperCollins states that they will be rolling out additional features and content for registered users soon. For now, though, without any unique features, it is unclear to me why someone would go to the HarperCollins site to get a preview of only their books, rather than go to Amazon and get previews across many more publishers.
This initiative is a small step in the right direction. At the end of the day, it’s a marketing tool, and it limits itself to that. Because they added links to various booksellers on the page, they can potentially reap the benefits of the long tail by helping readers find the more obscure titles in their catalogue. However, their focus is still on selling the physical book. They specifically stated that they do not want to become booksellers. (Although through their “Digital Media Cafe,” they are experimenting with selling digital content through their website.)
As readers increasingly want to interact with their media and text, a big question remains: are HarperCollins and the publishing industry ready to release the control they have traditionally held and reinterpret their purpose? With POD, search engines, and emergent communities, we are seeing the formation of new authors, filters, editors, and curators, playing the roles that publishers once filled. It will be interesting to see how far HarperCollins goes with these initiatives. For instance, HarperCollins also intends to start working with MySpace and Facebook to add links to its books. Are they prepared for negative commentary associated with those links? Are they ready to allow people to decide which books get attention?
If traditional publishers do not provide media (including text) in ways we are increasingly accustomed to receiving it, their relevance is at risk. We see them slowly trying to adapt to the shifting expectations and behaviors of people. However, in order to maintain that relevance, they need to deeply rethink what a publisher is today.

a2k wrap-up

Access to knowledge means that the right policies for information and knowledge production can increase both the total production of information and knowledge goods, and can distribute them in a more equitable fashion.
Jack Balkin, from opening plenary

I’m back from the A2K conference. The conference focused on intellectual property regimes and the international development issues associated with access to medical, health, science, and technology information. Many of the plenary panels dealt specifically with the international IP regime, currently enshrined in several treaties: WIPO, TRIPS, the Berne Convention (and a few more; more from Ray on those). But many others, instead of relying on the language of the treaties, focused on developing new language for advocacy, based on human rights: access to knowledge as an issue of justice and human dignity, not just an issue of intellectual property or infrastructure. The Institute is an advocate of open access, transparency, and sharing, so we share the mentality of most of the participants, even if we choose to assail the status quo from a grassroots level rather than the high halls of policy. Most of the discussions and presentations about international IP law were generally outside the scope of our work, but many of the smaller panels dealt with issues that, for me, illuminated our work in a new light.
In the Peer Production and Education panel, two organizations caught my attention: Taking IT Global and the International Institute for Communication and Development (IICD). Taking IT Global is an international youth community site, notable for its success with cross-cultural projects, and for the fact that it has been translated into seven languages—by volunteers. The IICD trains trainers in Africa. These trainers then go on to help others learn the technological skills necessary to obtain basic information and to empower them to participate in creating information to share.

“What I’m talking about is the fact that ‘global peripheries’ are using technologies to produce their own cultural products and become completely independent from ‘cultural industries.'”
—Ronaldo Lemos

The ideology of empowerment ran thick in the plenary panels. Ronaldo Lemos, in the Political Economy of A2K panel, dropped a few figures that showed just how powerful communities outside the scope and target of traditional development can be. He talked about communities at the edge, peripheries, that are using technology to transform cultural production. The figures staggered the crowd: last year Hollywood produced 611 films, but Nigeria, a country with only ONE movie theater (in the whole nation!), released 1,200 films. How? No copyright law, inexpensive technology, and low budgets (to say the least). He also mentioned the music industry in Brazil, where mainstream corporations release about 52 CDs a year by Brazilian artists across all genres, while the favelas release about 400 albums a year. It’s cheaper, and it’s what people want to hear (mostly baile funk).
We also heard the empowerment theme and A2K as “a demand of justice” from Jack Balkin, Yochai Benkler, Nagla Rizk, from Egypt, and from John Howkins, who framed the A2K movement as primarily an issue of freedom to be creative.
The panel on Wireless ICTs (and the accompanying wiki page) made it abundantly obvious that access isn’t only about IP law and treaties: it’s also about physical access, computing capacity, and training. This was a continuation of the Network Neutrality panel, and the theme carried through later with a rousing presentation by Onno W. Purbo on how he has been teaching people to “steal” last-mile infrastructure from the frequencies in the air.
Finally, I went to the Role of Libraries in A2K panel. The panelists spoke on several topics that are familiar territory for us at the Institute: the role of commercialized information intermediaries (Google, Amazon), fair use exemptions for digital media (including video and audio), the need for Open Access (only 15% of peer-reviewed journals are openly available), ways to advocate for increased access, better archiving, and enabling A2K in developing countries through libraries.

Human rights call on us to ensure that everyone can create, access, use and share information and knowledge, enabling individuals, communities and societies to achieve their full potential.
The Adelphi Charter

The name of the movement, Access to Knowledge, was chosen because, at the highest levels of international politics, it was the one phrase that everyone supported and no one opposed. It is an undeniable umbrella movement, under which different channels of activism, across multiple disciplines, can marshal their strength. The panelists raised important issues about development and capacity, but with a focus on human rights, justice, and dignity through participation. It was challenging, but reinvigorating, to hear some of our own rhetoric at the Institute repeated in the context of this much larger movement. We at the Institute are concerned with the uses of technology, whether in the US or internationally, and we’ll continue, in our own way, to embrace development with the goal of creating a future where technology serves to enable human dignity, creativity, and participation.

privacy matters

In a recent post, Susan Crawford magisterially weaves together a number of seemingly disparate strands into a disturbing picture of the future of privacy, looking first at the still under-appreciated vulnerability of social networking sites. Recently ratcheted-up scrutiny of MySpace and other similar episodes suggest to Crawford that some sort of privacy backlash is imminent — a backlash, however, that may come too late.
The “too late” part concerns the all too likely event of a revised Telecommunications bill that will give internet service providers unprecedented control over what data flows through their pipes, and at what speed:

…all of the privacy-related energy directed at the application layer (at social networks and portals and search engines) may be missing the point. The real story in this country about privacy will be at a lower layer – at the transport layer of the internet. The pipes. The people who run the pipes, and particularly the last mile of those pipes, are anxious to know as much as possible about their users. And many other incumbents want this information too, like law enforcement and content owners. They’re all interested in being able to look at packets as they go by their routers, something that doesn’t traditionally happen on the traditional internet.
…and looking at them makes it possible for much more information to be available. Cisco, in particular, has a strategy it calls the “self-defending network,” which boils down to tracking much more information about who’s doing what. All of this plays on our desires for security – everyone wants a much more secure network, right?

Imagine an internet without spam. Sounds great, but at what price? Manhattan is a lot safer these days (for white people at least) but we know how Giuliani pulled that one off. By talking softly and carrying a big broom; the Disneyfication of Times Square etc. In some ways, Times Square is the perfect analogy for what America’s net could become if deregulated.
And we don’t need to wait for Congress for the deregulation to begin. Verizon was recently granted exemption from rules governing business broadband service (price controls and mandated network-sharing with competitors) when a deadline passed for the FCC to vote on a 2004 petition from Verizon to entirely deregulate its operations. It’s hard to imagine how such a petition must have read:

“Dear FCC, please deregulate everything. Thanks. –Verizon”

And harder still to imagine that such a request could be even partially granted simply because the FCC was slow to come to a decision. These people must be laughing very hard in a room very high up in a building somewhere. Probably Times Square.
Last month, when a federal judge ordered Google to surrender a sizable chunk of (anonymous) search data to the Department of Justice, the public outcry was predictable. People don’t like it when the government starts snooping, treading on their civil liberties, hence the ongoing kerfuffle over wiretapping. What fewer question is whether Google should have all this information in the first place. Crawford picks up on this:

…three things are working together here, a toxic combination of a view of the presidency as being beyond the law, a view by citizens that the internet is somehow “safe,” and collaborating intermediaries who possess enormous amounts of data.
The recent Google subpoena case fits here as well. Again, the government was seeking a lot of data to help it prove a case, and trying to argue that Google was essential to its argument. Google justly was applauded for resisting the subpoena, but the case is something of a double-edged sword. It made people realize just how much Google has on hand. It isn’t really a privacy case, because all that was sought were search terms and URLs stored by Google — no personally-identifiable information. But still this case sounds an alarm bell in the night.

New tools may be in the works that help us better manage our online identities, and we should demand that networking sites, banks, retailers and all the others that handle our vital stats be more up front about their procedures and give us ample opportunity to opt out of certain parts of the data-mining scheme. But the question of pipes seems to trump much of this. How to keep track of the layers…
Another layer coming soon to an internet near you: network data storage. Online services that do the job of our hard drives, storing and backing up thousands of gigabytes of material that we can then access from anywhere. When this becomes cheap and widespread, it might be more than our identities getting snooped.
Amazon’s new S3 service charges 15 cents per gigabyte per month for storage, and 20 cents per gigabyte of data transferred. To the frequently asked question “how secure is my data?” they reply:

Amazon S3 uses proven cryptographic methods to authenticate users. It is your choice to keep your data private, or to make it publicly accessible by third parties. If you would like extra security, there is no restriction on encrypting your data before storing it in S3.

Yes, it’s our choice. But what if those third parties come armed with a court order?
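The “encrypt your data before storing it” option the FAQ mentions is worth dwelling on: if you encrypt on your own machine, the storage provider (and anyone who subpoenas it) holds only unreadable ciphertext. A minimal sketch of the idea follows, using a deliberately toy XOR-keystream cipher purely for illustration; a real deployment would use a vetted encryption library, not this.

```python
# Toy sketch of "encrypt before you upload": the provider never sees plaintext.
# NOT real cryptography -- for illustration only; use a vetted library in practice.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing the key with a counter."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the original."""
    ks = keystream(key, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

key = b"a secret only I hold"
diary = b"private diary entry"
ciphertext = toy_encrypt(key, diary)          # this is all the provider stores
assert ciphertext != diary                    # stored bytes are gibberish
assert toy_encrypt(key, ciphertext) == diary  # only the key holder can read it
```

Under this arrangement, a court order served on the storage provider yields only ciphertext; the key, and thus the choice, stays with the user.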

the social life of books

One of the most exciting things about Sophie, the open-source software the institute is currently developing, is that it will enable readers and writers to have conversations inside of books — both live chats and asynchronous exchanges through comments and social annotation. I touched on this idea of books as social software in my most recent “The Book is Reading You” post, and we’re exploring it right now through our networked book experiments with authors Mitch Stephens and, soon, McKenzie Wark, both of whom are writing books and opening up the process (with a little help from us) to readers. It’s a big part of our thinking here at the institute.
Catching up with some backlogged blog reading, I came across a little something from David Weinberger that suggests he shares our enthusiasm:

I can’t wait until we’re all reading on e-books. Because they’ll be networked, reading will become social. Book clubs will be continuous, global, ubiquitous, and as diverse as the Web.
And just think of being an author who gets to see which sections readers are underlining and scribbling next to. Just think of being an author given permission to reply.
I can’t wait.

Of course, ebooks as currently envisioned by Google and Amazon, bolted into restrictive IP enclosures, won’t allow for this kind of exchange. That’s why we need to be thinking hard right now about an alternative electronic publishing system. It may seem premature to say this — now, when electronic books are a marginal form — but before we know it, these companies will be the main purveyors of all media, including books, and we’ll wonder what the hell happened.

academic publishing as “gift culture”

John Holbo has an excellent piece up on the Valve that very convincingly argues the need to reinvent scholarly publishing as a digital, networked system. John will be attending a meeting we’ve organized in April to discuss the possible formation of an electronic press — read his post and you’ll see why we’ve invited him.
It was particularly encouraging, in light of recent discussion here, to see John clearly grasp the need for academics to step up to the plate and take into their own hands the development of scholarly resources on the web — now more than ever, as Google and Amazon move more aggressively to define how we find and read documents online:

…it seems to me the way for academic publishing to distinguish itself as an excellent form – in the age of google – is by becoming a bastion of ‘free culture’ in a way that google book won’t. We live in a world of Amazon ‘search inside’, but also of copyright extension and, in general, excessive I.P. enclosures. The groves of academe are well suited to be exemplary Creative Commons. But there is no guarantee they will be. So we should work for that.

googlezon and the publishing industry: a defining moment for books?

Yesterday Roger Sperberg made a thoughtful comment on my latest Google Books post in which he articulated (more precisely than I was able to do) the causes and potential consequences of the publishers’ quest for control. I’m working through these ideas with the thought of possibly writing an article, so I’m reposting my response (with a few additions) here. I would appreciate any feedback…
What’s interesting is how the Google/Amazon move into online books recapitulates the first flurry of ebook speculation in the mid-to-late 90s. At that time, the discussion was all about ebook reading devices, but then as now, the publishing industry’s pursuit of legal and technological control of digital books seemed to bring with it a corresponding struggle for control over the definition of digital books — i.e. what is the book going to become in the digital age? The word “ebook” — generally understood as a digital version of a print book — is itself part of this legacy of trying to stabilize the definition of books amid massively destabilizing change. The problem, of course, is that this throws up all sorts of walls — literal and conceptual — that close off avenues of innovation and rob books of much of their potential enrichment in the electronic environment.
cliffordlynch.jpg Clifford Lynch described this well in his important 2001 essay “The Battle to Define the Future of the Book in the Digital World”:

…e-book readers may be the price that the publishing industry imposes, or tries to impose, on consumers, as part of the bargain that will make large numbers of interesting works available in electronic form. As a by-product, they may well constrain the widespread acceptance of the new genres of digital books and the extent to which they will be thought of as part of the canon of respectable digital “printed” works.

A similar bargain is being struck now between publishers and two of the great architects of the internet: Google and Amazon. Naturally, they accept the publishers’ uninspired definition of electronic books — highly restricted digital facsimiles of print books — since it guarantees them the most profit now. But it points in the long run to a malnourished digital culture (and maybe, paradoxically, the persistence of print? since paper books can’t be regulated so devilishly).
As these companies come of age, they behave less and less like the upstart innovators they originally were, and more like the big corporations they’ve become. We see their grand vision (especially Google’s) contract as the focus turns to near-term success and the fluctuations of stock. It creates a weird paradox: Google Book Search totally revolutionizes the way we search and find connections between books, but amounts to a huge setback in the way we read them.
(For those of you interested in reading Lynch’s full essay, there’s a TK3 version that is far more comfortable to read than the basic online text. Click the image above or go here to download. You’ll have to download the free TK3 Reader first, which takes about 10 seconds. Everything can be found at the above link).

blu-ray, amazon, and our mediated technology dependent lives

A couple of recent technology news items got me thinking about media and proprietary hardware. One was the New York Times report on Sony’s problems with its next-generation DVD technology, Blu-Ray, which are causing it to delay the release of its next gaming system, the PS3. The other was a Wall Street Journal report on Amazon’s intention to enter the music subscription business.
The New York Times gives a good overview of the upcoming battle of hardware formats for the next generation of high-definition DVD players. It is the Betamax-versus-VHS war of the 80s all over again. This time around, Sony’s more expensive, higher-capacity standard is pitted against Toshiba’s cheaper but more limited HD-DVD standard. It is hard to predict an obvious winner, as Blu-Ray’s front-runner position has been weakened by the release delays (implying some technical challenges) and the recent backing of Toshiba’s standard by Microsoft (with ally Intel following). Last time around, Sony also bet on the similarly better but more expensive Betamax technology and lost, as consumers preferred the cheaper, lower-quality VHS. Sony is investing a lot in Blu-Ray, as the PS3 will be built upon it. A standards battle in the move from VHS to DVD was avoided because Sony and Philips scrapped their plans to release their own DVD standard and agreed to share in the licensing revenue of the Toshiba / Warner Brothers standard. However, Sony feels that creating format standards is an area of consumer electronics where it can and should dominate. Competing standards are nothing new, dating back at least to the battle between AC and DC electrical current (Edison’s preferred DC lost out to Westinghouse’s AC). They do, however, create confusion for consumers, who must decide which technology to invest in, with the risk that it may become obsolete in a few years.
On another front, Amazon also recently announced plans to release its own music player. In this sphere, Amazon is looking to compete with iTunes and Apple’s dominance of the music downloading sector. Initially, Apple surprised everyone with its foray into the music player and download market. What was even more surprising was that they were able to pull it off, as shown by their recent celebration of the one billionth downloaded song. Apple continues to command the largest market share, while warding off attempts from the likes of Walmart (the largest brick-and-mortar music retailer in the US). Amazon is pursuing a subscription-based model, even though Napster has failed to gain much traction with the same approach. Because Amazon customers already pay for music, Amazon will avoid Napster’s difficult challenge of convincing its millions of previous users to start paying for a service they once had for free, albeit illegally. Amazon’s challenge will be to persuade people to rent their music from Amazon rather than buy it outright. Both Real and Napster have only a fraction of Apple’s customers; however, the subscription model does carry higher profit margins than iTunes’ pay-per-song approach.
It is a logical step for Amazon, which sells large numbers of CDs, DVDs, and portable music devices (including iPods). As more people download music, Amazon realizes that it needs to protect its markets. In Amazon’s scheme, users can download as much music as they want; however, if they cancel their subscription, the music will no longer play on their devices. The model tests whether people are willing to rent their music, just as they rent DVDs from Netflix or borrow books from the library. I would feel troubled if I didn’t outright own my music; however, I can see the benefits of subscribing to access music and then buying the songs I liked. It appears, though, that you will not be able to store and play your own MP3s on the Amazon player, and the iPod will certainly not be able to use Amazon’s service. Amazon and partner Samsung must create a device compelling enough for consumers to drop their iPods. Because the iPod will not be compatible with Amazon’s service, Amazon may be forced to sell the players at heavy discounts or give them to subscribers for free, in a fashion similar to the cell phone business model. The subscription music download services have yet to create a player with any kind of social or technical cachet comparable to the cultural phenomenon of the iPod. Thus, the design bar has been set quite high for Amazon and Samsung. Amazon’s intentions highlight the issue of proprietary content and playback devices.
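The rental model described above, where music stops playing once the subscription lapses, can be sketched as a simple license check baked into the player. The class and field names below are hypothetical illustrations of the general scheme, not Amazon's actual implementation:

```python
# Hypothetical sketch of a subscription-music license check.
# Names and fields are invented for illustration, not Amazon's actual scheme.
from dataclasses import dataclass
from datetime import date

@dataclass
class License:
    track: str
    subscription_expires: date  # pushed forward each billing cycle while subscribed

def can_play(lic: License, today: date) -> bool:
    """The player refuses playback once the subscription has lapsed."""
    return today <= lic.subscription_expires

lic = License(track="some_song", subscription_expires=date(2006, 4, 30))
assert can_play(lic, date(2006, 4, 15))       # active subscriber: track plays
assert not can_play(lic, date(2006, 5, 10))   # canceled/lapsed: the music "expires"
```

The design choice worth noticing is that the gatekeeper lives on the device, not the store: the same downloaded file becomes unplayable the moment the license date passes, which is exactly what makes renting feel so different from owning.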
While all these companies jockey for position in the marketplace, there is little discussion of the consequences of wedding content to a particular player or reader. Print, painting, and photography do not rely on a separate device: the content and the displayer of the content, in other words the vessel, are the same thing. In the last century, the vessel and the content of media started to become discrete entities. With the development of recorded sound, film, and television, content required a player, and different manufacturers could produce vessels to play the content. Further, these new vessels inevitably require electricity. However, standards were formed so that a television could play any channel and an FM radio could play any FM station. Because technology is developing at a much faster rate, battles over standards occur more frequently. Vinyl records reigned for decades, whereas CDs dominated for about ten years before MP3s came along. Today, a handful of new music compression formats are vying to replace MP3. Furthermore, companies from Microsoft and Adobe to Sony and Apple appear ever more willing to create proprietary formats which require their software or hardware to access content.
As more information and media (and in a sense, we ourselves) migrate to digital forms, our reliance on often proprietary software and hardware for viewing and storage grows steadily. This fundamental shift in the ownership and control of content radically changes our relationship to media, and these changes receive little attention. We must be conscious of the implied and explicit contracts we agree to, as the information we produce and consume is increasingly mediated through technology. Similarly, as companies develop vertical integration business models, they enter into media production, delivery, storage, and playback. These business models create the temptation to start creating their own content, and perhaps to give preferential treatment to their internally produced media. (Amazon also has plans to produce and broadcast an Internet show with Bill Maher and various guests.) Amazon’s service and Blu-Ray are just current examples of content being tied to proprietary hardware. If information wants to be free, perhaps part of that freedom involves being independent of hardware and software.

the book is reading you

I just noticed that Google Book Search requires users to be logged in on a Google account to view pages of copyrighted works.
They provide the following explanation:

Why do I have to log in to see certain pages?
Because many of the books in Google Book Search are still under copyright, we limit the amount of a book that a user can see. In order to enforce these limits, we make some pages available only after you log in to an existing Google Account (such as a Gmail account) or create a new one. The aim of Google Book Search is to help you discover books, not read them cover to cover, so you may not be able to see every page you’re interested in.

So they’re tracking how much we’ve looked at and capping our number of page views. Presumably a bone tossed to publishers, who I’m sure will continue suing Google all the same (more on this here). There’s also the possibility that publishers have requested information on who’s looking at their books — geographical breakdowns and stats on click-throughs to retailers and libraries. I doubt, though, that Google would share this sort of user data. Substantial privacy issues aside, that’s valuable information they want to keep for themselves.
That’s because “the aim of Google Book Search” is also to discover who you are. It’s capturing your clickstreams, analyzing what you’ve searched and the terms you’ve used to get there. The book is reading you. Substantial privacy issues aside (it seems more and more that’s where we’ll be leaving them), Google will use this data to refine its search algorithms and, who knows, might even develop some sort of personalized recommendation system similar to Amazon’s — you know, where the computer lists other titles that might interest you based on what you’ve read, bought or browsed in the past (a system that works only if you are logged in). It’s possible Google is thinking of Book Search as the cornerstone of a larger venture that could compete with Amazon.
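The tracking-and-capping mechanism that the login requirement makes possible can be sketched as a per-account counter: every page served to a logged-in user is logged, and requests stop being honored once a limit is reached. The cap value and the in-memory bookkeeping below are assumptions for illustration; Google has not published how its limits actually work:

```python
# Hypothetical sketch of per-account page-view capping for in-copyright books.
# The cap value and in-memory storage are illustrative assumptions only.
from collections import defaultdict

PAGE_CAP = 20  # assumed limit of distinct pages per user per book

# (account, book) -> set of pages already shown; logging in makes the key possible
views = defaultdict(set)

def request_page(account: str, book: str, page: int) -> bool:
    """Serve the page only while the account is under the cap for this book."""
    seen = views[(account, book)]
    if page in seen:
        return True   # re-viewing an already-counted page costs nothing
    if len(seen) >= PAGE_CAP:
        return False  # cap reached: "you may not be able to see every page"
    seen.add(page)    # every fresh view is recorded against the account
    return True

for p in range(1, 21):
    assert request_page("alice@gmail.com", "some-novel", p)  # first 20 pages OK
assert not request_page("alice@gmail.com", "some-novel", 99)  # capped
assert request_page("alice@gmail.com", "some-novel", 5)       # already-seen page still OK
```

Note what the data structure implies: the same record that enforces the publisher's cap is, page by page, a reading history tied to an identity, which is exactly the dual use the post is worried about.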
There are many ways Google could eventually capitalize on its books database — that is, beyond the contextual advertising that is currently its main source of revenue. It might turn the scanned texts into readable editions, hammer out licensing agreements with publishers, and become the world’s biggest ebook store. It could start a print-on-demand service — a Xerox machine on steroids (and the return of Google Print?). It could work out deals with publishers to sell access to complete online editions — a searchable text to go along with the physical book — as Amazon announced it will do with its Upgrade service. Or it could start selling sections of books — individual pages, chapters etc. — as Amazon has also planned to do with its Pages program.
Amazon has long served as a valuable research tool for books in print, so much so that some university library systems are now emulating it. Recent additions to the Search Inside the Book program such as concordances, interlinked citations, and statistically improbable phrases (where distinctive terms in the book act as machine-generated tags) are especially fun to play with. Although first and foremost a retailer, Amazon feels more and more like a search system every day (and its A9 engine, though seemingly always on the back burner, is also developing some interesting features). On the flip side Google, though a search system, could start feeling more like a retailer. In either case, you’ll have to log in first.

exploring the book-blog nexus

It appears that Amazon is going to start hosting blogs for authors. Sort of. Amazon Connect, a new free service designed to boost sales and readership, will host what are essentially stripped-down blogs where registered authors can post announcements, news and general musings. Eventually, customers will be able to keep track of individual writers by subscribing to bulletins that collect in an aggregated “plog” stream on their Amazon home page. But comments and RSS feeds — two of the most popular features of blogs — will not be supported. Engagement with readers will be strictly one-way, and connection to the larger blogosphere basically nil. A missed opportunity, if you ask me.
Then again, Amazon probably figured it would be a misapplication of resources to establish a whole new province of blogland. This is more like the special events department of a bookstore — arranging readings, book signings and the like. There has on occasion, however, been some entertaining author-public interaction in Amazon’s reader reviews, most famously Anne Rice’s lashing out at readers for their chilly reception of her novel Blood Canticle (link – scroll down to first review). But evidently Connect blogs are not aimed at sparking this sort of exchange. Genuine literary commotion will have to occur in the nooks and crannies of Amazon’s architecture.
It’s interesting, though, to see this happening just as our own book-blog experiment, Without Gods, is getting underway. Over the past few weeks, Mitchell Stephens has been writing a blog (hosted by the institute) as a way of publicly stoking the fire of his latest book project, a narrative history of atheism to be published next year by Carroll and Graf. While Amazon’s blogs are mainly for PR purposes, our project seeks to foster a more substantive relationship between Mitch and his readers (though, naturally, Mitch and his publisher hope it will have a favorable effect on sales as well). We announced Without Gods a little over two weeks ago and already it has collected well over 100 comments, a high percentage of which are thoughtful and useful.
We are curious to learn how blogging will impact the process of writing the book. By working partially in the open, Mitch in effect raises the stakes of his research — assumptions will be challenged and theses tested. Our hunch isn’t so much that this procedure would be ideal for all books or authors, but that for certain ones it might yield some tangible benefit, whether due to the nature or breadth of their subject, the stage they’re at in their thinking, or simply a desire to try something new.
An example. This past week, Mitch posted a very thinking-out-loud sort of entry on “a positive idea of atheism” in which he wrestles with Nietzsche and the concepts of void and nothingness. This led to a brief exchange in the comment stream where a reader recommended that Mitch investigate the writings of Gora, a self-avowed atheist and figure in the Indian independence movement in the 30s. Apparently, Gora wrote what sounds like a very intriguing memoir of his meeting with Gandhi (whom he greatly admired) and his various struggles with the religious component of the great leader’s philosophy. Mitch had not previously been acquainted with Gora or his writings, but thanks to the blog and the community that has begun to form around it, he now knows to take a look.
What’s more, Mitch is currently traveling in India, so this could not have come at a more appropriate time. It’s possible that the commenter had noted this from a previous post, which may have helped trigger the Gora association in his mind. Regardless, these are the sorts of serendipitous discoveries one craves while writing a book. I’m thrilled to see the blog making connections where none previously existed.