Wednesday, June 19, 2013

Helix Review: Is It Worth It?


A new marketing tool has recently entered the world of indie publishing. Helix Review is similar to the Music Genome Project Pandora uses to generate music playlists based on archetype and genre. I decided to go ahead and splurge on the review for my novel Extreme Unction, and in my opinion, that’s $50 I’ll never see again. The sad thing is, it could have been a genuinely useful tool, but it’s not.
Helix Review scans your manuscript for factors such as keywords, style metrics, comparable titles, and story DNA (the amount of focus your book puts on certain themes and issues compared to other books in the genre and to books in general). The author then supposedly has a template for how the story fits into the genre and the publishing world at large, so that he or she can better target the marketing. Sounds great. The thing is, my review told me next to nothing that actually helps me target my story to anyone.
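My best guess, and it is only a guess since Helix doesn’t publish its methods, is that the keyword and comparable-titles pieces boil down to something like bag-of-words similarity between manuscripts. Here is a rough Python sketch of that idea using scikit-learn and made-up file names, purely to illustrate the kind of analysis I suspect is happening under the hood:

# A rough guess at how "comparable titles" might be scored: weight each book's
# keywords with TF-IDF, then rank the corpus by cosine similarity to the
# submitted manuscript. The file names and the corpus here are hypothetical.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = ["extreme_unction.txt", "the_doorbell_rang.txt", "the_sudoku_puzzle_murders.txt"]
texts = [Path(name).read_text(encoding="utf-8") for name in titles]

vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(texts)
similarity = cosine_similarity(vectors[0], vectors[1:]).ravel()  # my book vs. everything else

for name, score in sorted(zip(titles[1:], similarity), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")

If that is roughly what’s happening, then the usefulness of the output depends entirely on what’s in the corpus, which is exactly the complaint I’ll get to below.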

Word cloud showing my unique word usage, per the Helix Review of Extreme Unction
Let’s start with the comparable titles. According to Helix, the book in my genre that my story is most like is The Sudoku Puzzle Murders by Parnell Hall. Other books mine supposedly compares to include With This Puzzle, I Thee Kill by Parnell Hall, and, oh yes, the book mine is most like across all genres is The Sudoku Puzzle Murders by Parnell Hall. To be fair, they did mention a few other authors, but the only one I’d even heard of was Sandra Brown.
Not that I expected to be compared to Clive Cussler, Dashiell Hammett, or Rex Stout, but how am I supposed to market a hardboiled mystery about a car-loving atheist in the Holmesian mold to fans of cozies about a word-puzzle-loving old lady in the Miss Marple vein?
Let’s move on to the writing style metric, which measures motion, density, dialogue, description, and pacing. The person submitting to the review is asked to name a novel from the genome database to use for direct comparison, and the average ranges for all stories in the genre are also graphed. Under motion, for example, the average range on a scale of 1 to 100 for the mystery and detective genre seems to fall between 50 and 70. I scored a 56, and the book I gave for direct comparison, The Doorbell Rang by Rex Stout, scored around a 54. The normal range for pacing in the genre, however, seems to be between 25 and 50, and I scored only an 18. That would be worrisome, except that The Doorbell Rang scored even lower, and I remember it as one of the most briskly paced books in the series. I’m not saying the information isn’t interesting or enlightening. I’m just saying that from a marketing standpoint, it isn’t very useful.
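For what it’s worth, the whole style report boils down to a handful of numbers set against a genre band, which takes only a few lines of Python to restate. The scores below are the ones from my report; the genre ranges are eyeballed from the Helix charts, so treat them as approximate.

# Style-metric scores from my Helix report, compared against the genre ranges
# I eyeballed from its charts (mystery/detective averages, scale of 1 to 100).
genre_range = {"motion": (50, 70), "pacing": (25, 50)}
extreme_unction = {"motion": 56, "pacing": 18}
doorbell_rang = {"motion": 54}  # its pacing bar sits even lower than mine

for metric, (low, high) in genre_range.items():
    score = extreme_unction[metric]
    status = "inside" if low <= score <= high else "outside"
    print(f"{metric}: {score} ({status} the genre range {low}-{high})")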
So let’s take a look at the story DNA. Here they give examples of the DNA of other stories and then graph the subject’s story for comparison. My story, which involves a journalist investigating a crime, scored fairly high in the category of newspaper reporting/journalism, which is to be expected. However, the crime involves a Catholic priest accused of euthanizing a parishioner while administering the sacrament of last rites. I spend a great deal of time in the story describing that ritual, as well as other Catholic rituals such as confession and penance, so one might expect I would also score fairly high in the category of church services/religious worship. Instead, the bar shows only around 10% compared to all other mystery/detective novels, while the example they give for comparison, The Da Vinci Code, scores around 80%. For that to be correct, it seems to me the majority of books published in the genre would have to score close to zero, which seems unlikely.
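Again, I’m only guessing at the mechanics, but a story DNA bar presumably amounts to counting how much of the manuscript is devoted to a theme’s vocabulary and then ranking that share against the rest of the genre. A toy version in Python, with an invented word list and invented genre numbers (Helix doesn’t publish its categories), shows the general shape of such a metric:

import re

# Invented lexicon for the "church services/religious worship" theme; Helix's
# actual category word lists are not published, so this is purely illustrative.
WORSHIP_WORDS = {"priest", "sacrament", "confession", "penance", "absolution",
                 "mass", "parish", "ritual", "communion", "rites"}

def theme_share(text: str, lexicon: set[str]) -> float:
    """Fraction of a manuscript's words that belong to the theme lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for token in tokens if token in lexicon)
    return hits / len(tokens) if tokens else 0.0

def dna_bar(my_share: float, genre_shares: list[float]) -> float:
    """Percent of genre titles whose theme share falls below mine."""
    return 100 * sum(share < my_share for share in genre_shares) / len(genre_shares)

Under that reading, a 10% bar would mean nine out of ten mystery novels devote more of their text to worship than mine does, which is exactly the kind of result that makes me doubt the scale.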
Yet the thing that bothers me most about all of this is that it still could have been so much more helpful. Submitting to the project does not get your book included in the database to affect future results. In other words, independent publishers like me have no impact on the data generated. It’s still a field of traditionally published books we’re being measured against.
Moreover, nobody is using the results to drive recommendations. When you use iHeartRadio or Pandora, you will occasionally be offered an unknown artist who performs in the same musical milieu. When you buy a book on Amazon, you are shown titles that others who bought that book also purchased. Why not create recommendations on sites like Amazon based on similar or comparable books in the Helix database? And why not include independently published books as well, especially since we are paying to have our books compared in the first place?
Anyway, that’s my experience and my opinion. Take it for whatever it’s worth to you. If you asked for my advice, though, it would be to hold off on a Helix review of your work until they change some of their metrics and include our works in the database. After all, it’s not a judgment on the quality of our storytelling, so why not include us?
