Format: Physical Book
Publisher: Grin Verlag
Language: English
Pages: 98
Binding: Paperback
Dimensions: 21.0 x 14.8 x 0.6 cm
Weight: 0.14 kg
ISBN-13: 9783656700043

Enhancement in Web Crawler using Weighted Page Rank Algorithm based on VOL: Extended Architecture of Web Crawler (in English)

Gupta, Sachin (Author) · Grin Verlag · Paperback



Description of "Enhancement in Web Crawler using Weighted Page Rank Algorithm based on VOL: Extended Architecture of Web Crawler"

Master's thesis from the year 2014 in the subject Computer Science - Technical Computer Science, course: M.Tech, language: English.

Abstract: As the World Wide Web grows rapidly day by day, the number of web pages around the world is increasing into the millions and trillions. Search engines came into existence to make searching easier for users: web search engines are used to find specific information on the WWW. Without search engines, it would be almost impossible to locate anything on the Web unless we already knew a specific URL. Every search engine maintains a central repository, or database, of HTML documents in indexed form; whenever a user query arrives, the search is performed within that database of indexed web pages. No search engine's repository can accommodate every page available on the WWW, so it is desirable that only the most relevant and important pages be stored in the database, to increase the efficiency of the search engine.

This database of HTML documents is maintained by special software called a "crawler". A crawler is software that traverses the web and downloads web pages. Broad search engines, as well as many more specialized search tools, rely on web crawlers to acquire large collections of pages for indexing and analysis. Since the Web is a distributed, dynamic, and rapidly growing information resource, a crawler cannot download all of its pages; it crawls only a fraction of them. A crawler should therefore ensure that the fraction of pages it crawls contains the most relevant and most important ones, not just random pages.

In our work, we propose an extended architecture for the web crawler of a search engine, to crawl only relevant and important pages from the WWW, which will lead to reduced server overheads. With our proposed architecture we will also optimize the crawled data by removing least or neve
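To give a flavor of the Weighted PageRank idea the title refers to, here is a minimal sketch of the commonly cited Weighted PageRank formulation, in which each link is weighted by the relative in-link and out-link popularity of the target page. This is an illustration only: the book's VOL-based variant additionally weights links by visit counts, which is not reproduced here, and the example graph, damping factor, and iteration count are assumptions.

```python
# Hedged sketch of Weighted PageRank (in-link/out-link weighted variant).
# WPR(u) = (1 - d) + d * sum over in-neighbors v of
#          WPR(v) * W_in(v, u) * W_out(v, u)
# where the weights normalize u's in/out-link counts over all pages
# that v links to. The graph below is illustrative, not from the book.

def weighted_pagerank(links, d=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    in_count = {p: 0 for p in pages}                      # I(p): in-links
    out_count = {p: len(links.get(p, [])) for p in pages}  # O(p): out-links
    in_links = {p: [] for p in pages}
    for v, outs in links.items():
        for u in outs:
            in_count[u] += 1
            in_links[u].append(v)

    wpr = {p: 1.0 for p in pages}
    for _ in range(iters):
        new = {}
        for u in pages:
            s = 0.0
            for v in in_links[u]:
                ref = links[v]  # all pages referenced by v
                # normalize u's counts over v's reference set (guard /0)
                w_in = in_count[u] / max(sum(in_count[p] for p in ref), 1)
                w_out = out_count[u] / max(sum(out_count[p] for p in ref), 1)
                s += wpr[v] * w_in * w_out
            new[u] = (1 - d) + d * s
        wpr = new
    return wpr

# Tiny example graph: A -> B, A -> C, B -> C, C -> A
ranks = weighted_pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

Unlike standard PageRank, a page's score here is not split evenly among out-links; pages that are themselves well linked receive a larger share, which is the intuition a crawler can exploit to prioritize its frontier.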
