Saturday, April 5, 2008

Upload your publications to zkimmer TODAY

Later today, we are going to open the doors on our new upload site, which will allow anybody with a PDF or consecutively numbered JPEGs to publish in zkimmer format... we hope you think it's cool.

The site will be swapped over tomorrow afternoon PST. Come and try it out.

Thanks for visiting.

Saturday, March 29, 2008

zkimmer leaves Google API for OpenLayers

Today we decided to move from Google's Maps API to OpenLayers, and have even started tinkering with developing our own proprietary tiling technology. The more we invest in tweaking the user experience around the specific needs of online documents, the more apparent it becomes that we need to make major changes to the standard map-style tiling user experience.

Specific things we are doing include:
* zlayers for text, and positional Google-style text search/location.
* Page-by-page and direct-to-page-number navigation.
* Shift-and-hold rectangle drawing that lets the user fill the screen with the selected area. Amongst others...
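
The page-based tiling idea above can be sketched roughly. This is a hypothetical illustration, not zkimmer's actual code: `tileUrl` and `tileGrid` are made-up helper names, and the 256-pixel tile size is just a common mapping convention.

```javascript
// Hypothetical page-oriented tile addressing for a document viewer.
// Each page is rendered as an image and cut into fixed-size tiles,
// the same way a map server cuts up the world.

// Build the URL of one tile of one page at a given zoom level.
function tileUrl(baseUrl, page, zoom, col, row) {
  return `${baseUrl}/page-${page}/z${zoom}/${col}_${row}.jpg`;
}

// How many tile columns/rows cover a page image at a given scale?
function tileGrid(pageW, pageH, scale, tileSize = 256) {
  return {
    cols: Math.ceil((pageW * scale) / tileSize),
    rows: Math.ceil((pageH * scale) / tileSize),
  };
}

// A 1024x768 page at full scale needs a 4x3 grid of 256px tiles.
console.log(tileGrid(1024, 768, 1)); // { cols: 4, rows: 3 }
```

Direct-to-page-number navigation then reduces to swapping the `page-N` path segment and reloading the visible tiles.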

We tried to hang on and use the Google API because of its large user base and instant recognition, but the cons now outweigh the pros. OpenLayers, here we come.

Tuesday, February 19, 2008

New concept: Right-click any word to research a purchase

Way back around '99, I worked with the IP law firm Akin Gump to try for a patent in the area of bridging supplier web sites with retailer sites, so that anyone looking at a supplier site could be easily linked to a retailer site without the supplier needing to set up a retailer database, or the retailer having to convince the supplier to link to them. In my ideal world, a trusted accounting firm such as Ernst & Young would run the site, and every supplier in the world would have a little icon attached to every product page linking to the E&Y database, which in turn linked to a list of retailers searchable by price and distance.

On the retailer's side, all they needed to do was publish their inventory to E&Y as a backup of their existing inventory control system and voila... an internet surfer could look at the Lays website for the flavor of potato chips they like and then, with one click, know the nearest 7-Eleven that had it in stock and what price they were asking.

That idea got shelved... until this morning.

Lying in bed at 5:30, I started thinking about how to sidestep the whole Google search/advertising monopoly and tap into the communal collective that has so far been so effective in places like blogs, Wikipedia, and eBay's rating system...

So here's the idea.

A user is reading an article that mentions a product name or keyword they are interested in investigating with an aim to purchasing. They simply double-click to select the keyword, or drag to select the relevant expression, and then right-click.

What appears on the right-click menu is a function called shop4this, available because the user has at some previous time installed a browser extension that adds this capability. Upon selecting shop4this, the word or selection is copied to memory along with a snapshot of the surrounding text, so that an intelligent anticipation can be made of what the user is interested in researching to buy.
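
A minimal sketch of that capture step, using today's WebExtensions context-menu API as a stand-in (this post predates such extensions). `buildSearchQuery` is a hypothetical helper name, and the wiring shown in comments assumes Chrome's `chrome.contextMenus` API:

```javascript
// Hypothetical helper: pair the selected keyword with a snapshot of
// the surrounding text so the receiving site can anticipate intent.
function buildSearchQuery(selection, surroundingText, snapshotChars = 200) {
  const i = surroundingText.indexOf(selection);
  const start = Math.max(0, i - snapshotChars / 2);
  const end = i + selection.length + snapshotChars / 2;
  return {
    keyword: selection.trim(),
    context: surroundingText.slice(start, end).trim(),
  };
}

// Wiring in a browser extension's background script (assumption:
// Chrome's contextMenus API; the target URL is left as a placeholder):
//
// chrome.contextMenus.create({
//   id: "shop4this",
//   title: "shop4this: %s",
//   contexts: ["selection"],
// });
// chrome.contextMenus.onClicked.addListener((info, tab) => {
//   const q = buildSearchQuery(info.selectionText, info.selectionText);
//   chrome.tabs.create({ url: "..." + encodeURIComponent(q.keyword) });
// });
```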

The user is taken to the shop4this web site. Why a dot-net rather than a dot-com site? Well, firstly because the dot-com site was already owned, but also because the concept behind the idea is a network of people, not a single entity or company. Anyway, I digress...

The shop4this web site is displayed in a new window or tab and shows the user the best results it can find for the search. Options include new, used, or both, and searching by price or location, assuming the user has supplied their zip code as a reference.

This is where it gets interesting... if there are no entries (or not the right kind of entries) for the item, the user is invited to share their research with the other users of the shop4this site in return for credits generated by the site's advertising income. So the user agrees to go into monitor mode (which I haven't worked out exactly the best way to do yet) and searches the web until they find a good solution. Then, possibly using a right-click selection again, the user links their original search phrase with their desired found result, for others to use and for them to get credit for finding.

Using a reputation system and self-editing, along with a reward system that returns 50% of the advertising revenue to the searcher and the other people who use his or her search results, this thing may have some legs.

Business Model
Of course advertising can play a part in this. Google Ads-style sidebar text ads are a no-brainer. But sellers on the web can also sponsor their links from the shop4this site. These links would be highlighted as sponsored, so that people know they will get credits for following that link compared to others. So the site makes money three ways.
  1. Direct competitive advertising a la Google.
  2. Indirect/related advertising a la Google. And...
  3. Link sponsorship, where a portion of the retailer's sales is shared back with the searcher who originally found the item/search link and the followers who followed the path to the retailer.
How about these metrics:
  • A web site owner pays $20 per month to sponsor links to their product pages. The sponsorship includes up to 2000 pass-through links and up to 100 products.
  • The shop4this site keeps a flat 20% for itself for operations.
  • The original searcher links to one product on the sponsor's page. At this time there are 50 other searchers who have linked to various other products/pages on the sponsor's web site.
  • The searchers get to share 50% of 80%, i.e. 40%, of the revenue from the site owner's $20.
  • The other 40% is shared amongst the followers who use the link.
  • So if 50 searchers share the 40% at a ratio equal to the number of total visitors that used each searcher's link (i.e. one searcher's link represents 20% of the site's link traffic), that searcher would get 20% of 40% of $20 (or $1.60) in credits.
  • When the credits hit $30 they get a check or a PayPal credit. Plus they get 60 days to claim.
  • When a follower links to the page for the first time, and a total of 1000 hits link to the site over the month, they receive as credits 1/1000th of 40% of $20, which is 0.8 cents for the visit.
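
The arithmetic above can be checked with a short calculation. A hedged sketch: `creditSplit` is a made-up function name, and all the figures are this post's hypothetical numbers, not real rates.

```javascript
// Split a sponsor's monthly fee per the model above:
// 20% to the site, 40% to searchers, 40% to followers.
function creditSplit(monthlyFee, searcherShareOfTraffic, followerHits) {
  const pool = monthlyFee * 0.8;    // 80% left after the site's 20% cut
  const searcherPool = pool * 0.5;  // 40% of the fee
  const followerPool = pool * 0.5;  // the other 40%
  return {
    searcherCredit: searcherPool * searcherShareOfTraffic,
    followerCreditPerHit: followerPool / followerHits,
  };
}

// $20 fee, searcher's link drives 20% of traffic, 1000 follower hits:
const r = creditSplit(20, 0.2, 1000);
console.log(r.searcherCredit);       // 1.6   -> $1.60 in credits
console.log(r.followerCreditPerHit); // 0.008 -> 0.8 cents per hit
```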
What do you think? Has this got legs? Why not leave a comment to say whether you would try something like this...

Note: Ric is actively working on this project with a view to securing a patent for some or all of the ideas expressed in this blog entry.

Thursday, January 31, 2008

Google Search’s Achilles’ Heel

After reading the first few chapters of the book Search, it is now dawning on me how truly innovative the Google search algorithm is. The whole analogy of treating web pages as if they were technical white papers, with their citations and references, is a true masterstroke.

But therein also is its weakness.

Over the next few weeks I intend to test some basic theories about how Google does its ranking. One test will involve one of our latest projects, called zkimmer. Essentially this is a graphics-based mapping engine that has been adapted for publication display… I will be testing my theories’ ability to make specific zkimmer publications climb quickly up Google’s page rankings.

I will also experiment with the name of a person who has allowed me to test how mentions of his name affect Google page ranking.

If these two experiments work out successfully, then I fully intend to capitalize on them with the zkimmer publication list, and then roll out a separate standalone company to capitalize on the discovery. Come back in two weeks for updates.

Relevant research links include the original Google algorithm white paper by the Google team, The Anatomy of a Large-Scale Hypertextual Web Search Engine. Further research uncovered this gem about citations in Wikipedia and how to reverse-engineer citations in that incredible resource.

Followup projects for this entry:
  1. Find out how to publish in an educational whitepaper database like
  2. A basic test to see how citations boost PageRank... possibly by using a citation engine that looks for appropriate, highly ranked reference points within keywords for a given web site.