这厮

observing



posted @ 2012-04-08 00:21 cnbarry

Money site -> tier 1 -> SB blast (blog comments).
But this is far from enough: add daily posting on forums, blogs, bookmarks, and article directories as well.
posted @ 2012-04-04 15:29 cnbarry

People say that machines are made by people and have no feelings; to say that someone is "like a machine" is to say he is wooden and unemotional. But I am convinced the modem, the router, and my computer at home do have feelings! For days I have been frustrated by our home internet connection, and when I plugged in the network cable I ran into the strangest problem I have ever heard of. This modem wants me to stand up! Otherwise it switches off the ADSL light, which means no internet. The moment I sit down the light goes out; the moment I stand up it comes back on. Sit, it dies; stand, it works. I rested a while, stood a while, and played this game with it countless times, and it happened every single time without exception. So frustrating!

Actually it is not that I can never sit down, but if I do, the computer and the modem have to be very close together (no, I am not on wireless), and my face must be higher than both the modem and the computer. I have to look down on them! How can a problem like this even happen? It makes me wonder whether a modem is the kind of thing that expects a show of respect from its owner. I really cannot figure it out: the modem was just replaced and the router is fine, yet it simply insists on being looked down upon while in use. Good heavens, what am I supposed to say!
 
posted @ 2012-03-11 18:08 cnbarry

    Only registered users can read this post after logging in.
posted @ 2012-02-22 11:26 cnbarry

Default Ports:

  • SMTP AUTH: Port 25 or 587 (some ISPs block port 25)
  • SMTP StartTLS Port 587
  • SMTP SSL Port 465
  • POP Port 110
  • POP SSL Port 995

For each provider, the SMTP server handles outgoing messages and the POP3 server handles incoming messages.

Googlemail / Gmail
  SMTP: smtp.gmail.com (SSL port 465, StartTLS port 587)
  POP3: pop.gmail.com (SSL port 995)
  Note: make sure POP3 access is enabled in the account settings; log in to your account and enable POP3.

Yahoo Mail
  SMTP: smtp.mail.yahoo.com (SSL port 465)
  POP3: pop.mail.yahoo.com (SSL port 995)

Yahoo Mail Plus
  SMTP: plus.smtp.mail.yahoo.com (SSL port 465)
  POP3: plus.pop.mail.yahoo.com (SSL port 995)

Yahoo UK
  SMTP: smtp.mail.yahoo.co.uk (SSL port 465)
  POP3: pop.mail.yahoo.co.uk (SSL port 995)

Yahoo Deutschland
  SMTP: smtp.mail.yahoo.de (SSL port 465)
  POP3: pop.mail.yahoo.de (SSL port 995)

Yahoo AU/NZ
  SMTP: smtp.mail.yahoo.com.au (SSL port 465)
  POP3: pop.mail.yahoo.com.au (SSL port 995)

O2
  SMTP: smtp.o2.ie / smtp.o2.co.uk
  POP3: pop3.o2.ie / pop3.o2.co.uk

AT&T
  SMTP: smtp.att.yahoo.com (SSL port 465)
  POP3: pop.att.yahoo.com (SSL port 995)

NTL (@ntlworld.com)
  SMTP: smtp.ntlworld.com (SSL port 465)
  POP3: pop.ntlworld.com (SSL port 995)

BT Connect
  SMTP: mail.btconnect.com
  POP3: pop3.btconnect.com

BT Openworld & BT Internet
  SMTP: mail.btopenworld.com / mail.btinternet.com
  POP3: mail.btopenworld.com / mail.btinternet.com

Orange
  SMTP: smtp.orange.net / smtp.orange.co.uk
  POP3: pop.orange.net / pop.orange.co.uk

Wanadoo UK
  SMTP: smtp.wanadoo.co.uk
  POP3: pop.wanadoo.co.uk

Hotmail
  SMTP: smtp.live.com (StartTLS port 587)
  POP3: pop3.live.com (SSL port 995)

O2 Online Deutschland
  SMTP: mail.o2online.de
  POP3: pop.o2online.de

T-Online Deutschland
  SMTP: smtpmail.t-online.de (AUTH) / securesmtp.t-online.de (SSL)
  POP3: popmail.t-online.de (AUTH) / securepop.t-online.de (SSL)

1&1 (1and1)
  SMTP: smtp.1and1.com (StartTLS port 25 or 587)
  POP3: pop.1and1.com (SSL port 995)

1&1 Deutschland
  SMTP: smtp.1und1.de (StartTLS port 25 or 587)
  POP3: pop.1und1.de (SSL port 995)

Comcast
  SMTP: smtp.comcast.net (port 587)
  POP3: mail.comcast.net

Verizon
  SMTP: outgoing.verizon.net (port 587)
  POP3: incoming.verizon.net
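As a quick way to sanity-check settings like these, the sketch below (a minimal example assuming Python's standard smtplib and poplib, using Gmail's servers from the table) logs in over SSL on ports 465 and 995; the address and password are placeholders, and Gmail may also require POP access or an app-specific password to be enabled first.

    # Minimal sketch: check SMTP (SSL, port 465) and POP3 (SSL, port 995) settings.
    # The account name and password below are placeholders, not real credentials.
    import smtplib
    import poplib

    USER = "you@gmail.com"      # placeholder account
    PASSWORD = "app-password"   # placeholder password

    # Outgoing mail: SMTP over SSL on port 465.
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
        smtp.login(USER, PASSWORD)
        print("SMTP login OK")

    # Incoming mail: POP3 over SSL on port 995 (POP must be enabled in the account settings).
    pop = poplib.POP3_SSL("pop.gmail.com", 995)
    pop.user(USER)
    pop.pass_(PASSWORD)
    msg_count, mailbox_size = pop.stat()
    print(msg_count, "messages,", mailbox_size, "bytes")
    pop.quit()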

posted @ 2012-02-13 15:12 cnbarry

Target:
Deployment environment: MSSQL 2005 + XP Pro SP3 + IIS 5.1
Materials: a .NET website plus a .bak database file backed up from MSSQL 2005

Detail:
1. Configure and install the environment.
The only first-hand materials were the website, developed in .NET, plus an MSSQL 2000 database backup.

Problem 1: installing IIS. (Of the computers I have used over the past four years, very few came with the local IIS server already installed, unless they belonged to a teacher or had previously been used by a developer.) I will not cover the case where the machine already has the IIS configuration files and service; assume instead that IIS is the one service that has not been installed and no installation files have been prepared. Having already installed the full IIS package successfully before, this installation was comparatively painless.

First, pick the right installation package. Two things to watch. One, check the OS: some non-Professional editions of XP do not offer IIS as a configurable service at all, and you will have to work around that yourself; as long as IIS shows up under Add/Remove Windows Components, you are fine. Two, download the IIS package that matches your version: the XP Pro SP3 machine I used can run IIS 5.1, and the files it needs are not found in other versions such as IIS 6. (No screenshots for this part.)

Next, pick the right version and set up the default web site on the server, including adding ASP file support. One point is very important here: if the site was built on .NET, the .NET Framework 2.0 or 4.0 must be installed on the machine first, and then .NET 2.0 or 4.0 must be selected in the ASP.NET configuration. If a "Server Application Error" still appears after installing .NET, the server components are clearly misconfigured: either a required system file was not found, or the installed .NET (or some other required component) is the wrong version. If files are missing, reinstall the matching version; if the versions do not match, try removing all the other .NET versions (.NET 3, .NET 4). Whether the version currently in use is the one you need can be checked in the Services console (run services.msc).

With IIS and .NET both configured, local .htm files opened without problems, and .aspx files opened without problems too.

2. Install the database.
Why install a database at all? As I said at the start, everything here came from someone else. The backup is an MSSQL 2005 database, so I first went to the official site to download SQL Server; that was too slow, so I switched to a domestic download site. I initially downloaded Microsoft SQL Server Management Studio Express, which installed entirely in English and could not restore the backup. A later search online showed that I had the wrong Express edition: the one installed could only browse databases and had no features for creating new databases or managing them. Switching to a different, larger package finally solved the problem.

That's enough writing. Think more, and don't just flail around.
--Barry
One more note:
Detailed, illustrated walkthroughs for converting a .bak file backed up from an MSSQL 2000 database are available online, for example:
http://www.cnblogs.com/dlwang2002/archive/2009/03/20/1417953.html
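For reference, here is a rough sketch of doing the restore from code instead of from Management Studio, assuming Python with the pyodbc package against a local SQL Server Express instance; the instance name, login, database name, and file path are placeholders, not values from the deployment described above.

    # Rough sketch (assumed setup, not the exact steps above): restore a .bak via pyodbc.
    # Instance name, login, database name, and paths are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        r"DRIVER={SQL Server};SERVER=.\SQLEXPRESS;UID=sa;PWD=your_password",
        autocommit=True,  # RESTORE DATABASE cannot run inside a user transaction
    )
    cursor = conn.cursor()
    cursor.execute(
        r"RESTORE DATABASE MySite FROM DISK = 'C:\backup\mysite.bak' WITH REPLACE"
    )
    # Drain informational result sets so the restore runs to completion.
    while cursor.nextset():
        pass
    conn.close()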

posted @ 2012-02-12 20:35 cnbarry

1. Introduction

(Note: There are two versions of this paper -- a longer full version and a shorter printed version. The full version is available on the web and the conference CD-ROM.) 
The web creates new challenges for information retrieval. The amount of information on the web is growing rapidly, as well as the number of new users inexperienced in the art of web research. People are likely to surf the web using its link graph, often starting with high quality human maintained indices such as Yahoo! or with search engines. Human maintained lists cover popular topics effectively but are subjective, expensive to build and maintain, slow to improve, and cannot cover all esoteric topics. Automated search engines that rely on keyword matching usually return too many low quality matches. To make matters worse, some advertisers attempt to gain people's attention by taking measures meant to mislead automated search engines. We have built a large-scale search engine which addresses many of the problems of existing systems. It makes especially heavy use of the additional structure present in hypertext to provide much higher quality search results. We chose our system name, Google, because it is a common spelling of googol, or 10100 and fits well with our goal of building very large-scale search engines.

1.1 Web Search Engines -- Scaling Up: 1994 - 2000

Search engine technology has had to scale dramatically to keep up with the growth of the web. In 1994, one of the first web search engines, the World Wide Web Worm (WWWW) [McBryan 94] had an index of 110,000 web pages and web accessible documents. As of November, 1997, the top search engines claim to index from 2 million (WebCrawler) to 100 million web documents (from Search Engine Watch). It is foreseeable that by the year 2000, a comprehensive index of the Web will contain over a billion documents. At the same time, the number of queries search engines handle has grown incredibly too. In March and April 1994, the World Wide Web Worm received an average of about 1500 queries per day. In November 1997, Altavista claimed it handled roughly 20 million queries per day. With the increasing number of users on the web, and automated systems which query search engines, it is likely that top search engines will handle hundreds of millions of queries per day by the year 2000. The goal of our system is to address many of the problems, both in quality and scalability, introduced by scaling search engine technology to such extraordinary numbers.

1.2. Google: Scaling with the Web

Creating a search engine which scales even to today's web presents many challenges. Fast crawling technology is needed to gather the web documents and keep them up to date. Storage space must be used efficiently to store indices and, optionally, the documents themselves. The indexing system must process hundreds of gigabytes of data efficiently. Queries must be handled quickly, at a rate of hundreds to thousands per second.

These tasks are becoming increasingly difficult as the Web grows. However, hardware performance and cost have improved dramatically to partially offset the difficulty. There are, however, several notable exceptions to this progress such as disk seek time and operating system robustness. In designing Google, we have considered both the rate of growth of the Web and technological changes. Google is designed to scale well to extremely large data sets. It makes efficient use of storage space to store the index. Its data structures are optimized for fast and efficient access (see section 4.2). Further, we expect that the cost to index and store text or HTML will eventually decline relative to the amount that will be available (see Appendix B). This will result in favorable scaling properties for centralized systems like Google.

1.3 Design Goals

1.3.1 Improved Search Quality

Our main goal is to improve the quality of web search engines. In 1994, some people believed that a complete search index would make it possible to find anything easily. According to Best of the Web 1994 -- Navigators,  "The best navigation service should make it easy to find almost anything on the Web (once all the data is entered)."  However, the Web of 1997 is quite different. Anyone who has used a search engine recently, can readily testify that the completeness of the index is not the only factor in the quality of search results. "Junk results" often wash out any results that a user is interested in. In fact, as of November 1997, only one of the top four commercial search engines finds itself (returns its own search page in response to its name in the top ten results). One of the main causes of this problem is that the number of documents in the indices has been increasing by many orders of magnitude, but the user's ability to look at documents has not. People are still only willing to look at the first few tens of results. Because of this, as the collection size grows, we need tools that have very high precision (number of relevant documents returned, say in the top tens of results). Indeed, we want our notion of "relevant" to only include the very best documents since there may be tens of thousands of slightly relevant documents. This very high precision is important even at the expense of recall (the total number of relevant documents the system is able to return). There is quite a bit of recent optimism that the use of more hypertextual information can help improve search and other applications [Marchiori 97] [Spertus 97] [Weiss 96] [Kleinberg 98]. In particular, link structure [Page 98] and link text provide a lot of information for making relevance judgments and quality filtering. Google makes use of both link structure and anchor text (see Sections 2.1 and 2.2).
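To make that notion concrete (a small illustration added here, not code from the paper), precision at k over a ranked result list can be computed as follows; the document identifiers and relevance set are made up.

    # Illustration only: precision at k for a ranked list of document ids.
    def precision_at_k(ranked_ids, relevant_ids, k):
        """Fraction of the top-k returned documents that are relevant."""
        top_k = ranked_ids[:k]
        hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
        return hits / k

    # Example: 3 of the top 10 results are relevant -> precision@10 = 0.3.
    print(precision_at_k(list(range(100)), {2, 5, 7}, 10))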

1.3.2 Academic Search Engine Research

Aside from tremendous growth, the Web has also become increasingly commercial over time. In 1993, 1.5% of web servers were on .com domains. This number grew to over 60% in 1997. At the same time, search engines have migrated from the academic domain to the commercial. Up until now most search engine development has gone on at companies with little publication of technical details. This causes search engine technology to remain largely a black art and to be advertising oriented (see Appendix A). With Google, we have a strong goal to push more development and understanding into the academic realm.

Another important design goal was to build systems that reasonable numbers of people can actually use. Usage was important to us because we think some of the most interesting research will involve leveraging the vast amount of usage data that is available from modern web systems. For example, there are many tens of millions of searches performed every day. However, it is very difficult to get this data, mainly because it is considered commercially valuable.

Our final design goal was to build an architecture that can support novel research activities on large-scale web data. To support novel research uses, Google stores all of the actual documents it crawls in compressed form. One of our main goals in designing Google was to set up an environment where other researchers can come in quickly, process large chunks of the web, and produce interesting results that would have been very difficult to produce otherwise. In the short time the system has been up, there have already been several papers using databases generated by Google, and many others are underway. Another goal we have is to set up a Spacelab-like environment where researchers or even students can propose and do interesting experiments on our large-scale web data.

source: http://infolab.stanford.edu/~backrub/google.html

posted @ 2012-02-04 20:42 cnbarry

Abstract

       In this paper, we present Google, a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext. Google is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems. The prototype with a full text and hyperlink database of at least 24 million pages is available at http://google.stanford.edu/ 
       To engineer a search engine is a challenging task. Search engines index tens to hundreds of millions of web pages involving a comparable number of distinct terms. They answer tens of millions of queries every day. Despite the importance of large-scale search engines on the web, very little academic research has been done on them. Furthermore, due to rapid advance in technology and web proliferation, creating a web search engine today is very different from three years ago. This paper provides an in-depth description of our large-scale web search engine -- the first such detailed public description we know of to date. 
       Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved with using the additional information present in hypertext to produce better search results. This paper addresses this question of how to build a practical large-scale system which can exploit the additional information present in hypertext. Also we look at the problem of how to effectively deal with uncontrolled hypertext collections where anyone can publish anything they want.
posted @ 2012-02-04 20:39 cnbarry

Email Address
[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}
\b[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b
Email Address (Anchored)
^[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$
Email Address without Consecutive Dots
\b[A-Z0-9._%-]+@(?:[A-Z0-9-]+\.)+[A-Z]{2,4}\b
Email Address on Specific Top Level Domains
^[A-Z0-9._%-]+@[A-Z0-9.-]+\.(?:[A-Z]{2}|com|org|net|biz|info|name|aero|jobs|museum)$
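A small usage sketch (assuming Python's re module, with the case-insensitive flag standing in for the uppercase-only character classes above):

    # Sketch: validate addresses with the anchored pattern above (case-insensitive).
    import re

    EMAIL_RE = re.compile(r"^[A-Z0-9._%-]+@[A-Z0-9.-]+\.[A-Z]{2,4}$", re.IGNORECASE)

    for address in ["user.name@example.com", "not-an-email"]:
        print(address, "->", "valid" if EMAIL_RE.match(address) else "invalid")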
posted @ 2012-02-01 21:03 cnbarry

Today Jimmy Wales, founder of the non-profit behind the information archive Wikipedia, announced that the site will go dark for 24 hours on Wednesday in protest of the Stop Online Piracy Act (SOPA).
Quote:
Jimmy Wales @jimmy_wales TWITTER update
Student warning! Do your homework early. Wikipedia protesting bad law on Wednesday! #sopa
While only the English version of the site will be down, it accounts for 25 million daily visitors according to Wales:
Quote:
Jimmy Wales @jimmy_wales 
comScore estimates the English Wikipedia receives 25 million average daily visitors globally.
When we talked to Wales in November, he told us that Wikipedia had over 420m unique monthly visitors, and there are now over 20 million articles on Wikipedia across almost 300 languages.
As we reported last week, the site was contemplating taking this action along with Reddit who announced that it would black out its site in protest against SOPA.
During the 24-hour shutdown, Wikipedia will be replaced with instructions on how to reach out to your US members of Congress, and Wales says he hopes the measure will “melt phones” with call volume:
Quote:
Jimmy Wales @jimmy_wales
This is going to be wow. I hope Wikipedia will melt phone systems in Washington on Wednesday. Tell everyone you know!
Along with Reddit, Wikipedia joins huge Internet names like WordPress, Mozilla, and all of the Cheezburger properties in Wednesday’s “black out” protest.
The proposed act endangers the future of sites like these by holding them directly accountable for content placed on them. It has been widely reported that if an act like this passed and became enforceable, many Internet businesses would suffer greatly under the new scrutiny placed on them by the government.
posted @ 2012-01-17 08:17 cnbarry
