I’ve noticed something today after a post over at webmaster-talk.com. Apparently FeedBurner serves two different types of feed under the same URL. I’m not sure what triggers one or the other, or how to fix it, but I have noticed that the source of one feed is HTML whereas the other is XML. Is this a bug in FeedBurner, and what determines which one it shows? Anyone know?
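One plausible explanation (my guess, not anything FeedBurner has confirmed) is content negotiation: the service may look at the request’s Accept or User-Agent headers and serve a browser-friendly HTML view to browsers while serving raw XML to feed readers. A minimal sketch of that idea:

```python
def choose_representation(accept_header: str) -> str:
    """Crude content negotiation: return 'html' when the client
    (typically a browser) lists text/html first, otherwise 'xml'."""
    # The first item in an Accept header is the client's top preference.
    top_choice = accept_header.split(",")[0].strip()
    if top_choice.startswith("text/html"):
        return "html"
    return "xml"

# A browser usually sends something like this:
print(choose_representation("text/html,application/xhtml+xml,*/*;q=0.8"))  # html
# A feed reader might ask for the feed type directly:
print(choose_representation("application/rss+xml, application/atom+xml"))  # xml
```

If that’s what’s happening, it isn’t an error at all, just two views of the same feed.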
Google shocked everyone with a surprise PageRank update a couple of days ago. This site got boosted up to a two (yay). If you need to check your PageRank, you can install the Google Toolbar and enable the PageRank option. PageRank is, roughly, a measure of how many sites link to yours, weighted by how important those linking sites are themselves. Do a search and you will find tons of stuff!
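For the curious: PageRank is a bit more than a raw link count, since a link from a highly ranked page passes more weight than one from an obscure page. Here’s a toy sketch of the underlying idea (simple power iteration with the usual 0.85 damping factor; the three-page link graph is made up for illustration):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: `links` maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets a small base amount, plus shares from its inlinks.
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Made-up graph: A and C both link to B, and B links back to A.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["B"]})
# B ends up highest: it gets links from both A and C.
```

Nothing like Google’s real implementation, of course, but it shows why “how many sites link to you” is only half the story.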
Ok, so today I’ve been checking out feeds; you know, the ones your website or blog software automatically creates, yet you never know what to do with 🙂 Anyway, I wanted to see how many readers I currently have at a couple of sites. Using a popular service called FeedBurner, I’ve managed to create new feed links and reroute the old links to the new FeedBurner ones. You also get a count of how many readers you currently have (a useful feature). I’ll see how it goes and keep playing around with it (there are a lot of settings).
I’ve already burned the JesusDevotionals feed. FeedBurner is really a good service; I don’t like the idea of losing control of my URL (which you can at least set to redirect), but I guess it’s the only way. I’ll let you know how it goes. Next to come: the Top Country Videos feed!
Check out this site for more info: RSS Feeds
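For anyone wanting to do the same kind of redirect I mentioned above, here’s a rough .htaccess sketch (the file name and feed address are made-up placeholders, not my actual setup; the user-agent exception is there because FeedBurner’s own fetcher still needs to read the original feed):

```apache
# Send feed readers from the old feed URL to the FeedBurner one,
# but let FeedBurner's own fetcher through to the source feed.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} !FeedBurner
RewriteRule ^feed\.xml$ http://feeds.feedburner.com/ExampleFeed [R=302,L]
```

A temporary (302) redirect keeps your options open if you ever want your old URL back.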
A couple of security fixes, but nothing major. It should download and install automatically today (if you have automatic updates on). I guess the bigger news is the newer release of IE8 😉
Ok, so I’m taking my time and slowly (sometimes painfully, but joyfully) learning the Apache mod_rewrite rules. Wish me luck!
Hopefully I can get this working!
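For anyone else starting out, the classic first exercise is turning a query-string URL into a “pretty” one. A minimal sketch of what these rules look like in an .htaccess file (the file and parameter names here are just placeholders):

```apache
RewriteEngine On
# Internally map /articles/123 to /article.php?id=123,
# so visitors never see the query string.
RewriteRule ^articles/([0-9]+)/?$ article.php?id=$1 [L]
```

The parentheses capture the number, `$1` reuses it, and `[L]` stops processing further rules once this one matches.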
I’ve found this great tool for creating a sitemap quickly (well, depending on the number of URLs you have) and fairly easily (after overcoming the learning curve), with a good number of options.
Unlike online sitemap tools, which usually limit you to 500 or so pages (often not enough if you have a blog with many posts), GSiteCrawler is a downloadable program that runs on your own computer, sending a network of spiders to fetch your site’s information.
You can customize settings such as URLs not to crawl, parameters to drop, and so on, and the options for outputting files after a crawl are quite extensive (many different formats, including CSV and GZip files). There are other programs you can buy, but this one definitely gets the job done.
You can download a copy for free (it’s open source) at http://gsitecrawler.com/en/download/. Happy crawling!
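If you’d rather skip a tool entirely, a sitemap is just simple XML. Something like this little sketch writes a minimal one from a list of URLs (the example.com addresses are placeholders):

```python
from xml.sax.saxutils import escape

def write_sitemap(urls):
    """Return a minimal sitemap.xml document for the given URLs."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        # escape() handles &, < and > so the XML stays well-formed.
        lines.append("  <url><loc>%s</loc></url>" % escape(url))
    lines.append("</urlset>")
    return "\n".join(lines)

sitemap = write_sitemap(["http://example.com/", "http://example.com/about"])
print(sitemap)
```

Tools like GSiteCrawler earn their keep on the crawling side, which is the hard part; the output format itself is trivial.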
Ask.com is now claiming to follow in Google’s recent footsteps by obeying the canonical URL tag, which lets you declare one preferred URL as the base page for duplicate pages on a site. Looks like Ask is still alive and kicking in the search engine marketplace. More can be found here: http://www.webmasterworld.com/ask_jeeves_teoma/3856744.htm God bless.
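For reference, the tag in question is a single line in a page’s head. If, say, /product?sort=price and /product show the same content, you’d point both at the clean URL (example.com is a placeholder):

```html
<head>
  <!-- Tells supporting engines which URL is this page's preferred version -->
  <link rel="canonical" href="http://example.com/product" />
</head>
```

The more engines honor it, the less duplicate-content juggling webmasters have to do.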