February 19, 2006
During analysis of an "IO Error: Connection reset", many articles suggested it could be caused by the Java security code (blocking on /dev/random) used during JDBC connection setup. However, that is not the root cause in my case.
In my environment, Java already uses /dev/urandom.
1. $JAVA_HOME/jre/lib/security/java.security
securerandom.source=file:/dev/./urandom
2. Check with strace.
Only -Djava.security.egd=file:/dev/../dev/urandom triggers the system call (a read on /dev/urandom);
all the other path formats, like those below, are OK:
-Djava.security.egd=file:/dev/./urandom
-Djava.security.egd=file:///dev/urandom
3. I also kept checking the entropy pool size; I have never seen it exhausted.
while true
do
    cat /proc/sys/kernel/random/entropy_avail
    sleep 1
done
Usually the available entropy is in the range of 1000 to 3000.
So far, there is no clue about the root cause of "IO Error: Connection reset".
I encountered many issues during installation of Oracle Grid Infrastructure (GI) and Database;
with the help of articles and documents found through Google,
I finally made it work. For the record, here are the details of the issues encountered and the solutions applied.
Most of the issues were encountered during GI installation.
Pre-installation tasks.
Issue 1: swap space is not big enough. (1.3.1 Verify System Requirements)
grep MemTotal /proc/meminfo
264G
grep SwapTotal /proc/meminfo
2G
During OS installation I took the default option, so swap space was only 2G.
Oracle recommends more than 16G of swap space when there is more than 32G of RAM.
dd if=/dev/zero of=/home/swapfile bs=1024 count=33554432
33554432+0 records in
33554432+0 records out
34359738368 bytes (34 GB) copied
chmod 0600 /home/swapfile
mkswap /home/swapfile
swapon /home/swapfile
Lesson learned: set up swap space properly according to DB requirements when installing the OS.
Issue 2: cannot find oracleasm-kmp-default on the Oracle site.
(1.3.6 Prepare Storage for Oracle Automatic Storage Management)
Installing oracleasmlib and oracleasm-support is easy: just download them from Oracle and install them.
Originally the oracleasm kernel module was provided by Oracle, but now I cannot find it there; finally I
realized that the oracleasm kernel module is now provided by the OS vendor.
In my case, it should be installed from the SUSE disk.
a. Get its name, oracleasm-kmp-default:
zypper se oracle
b. Mount the DVD and install:
zypper in oracleasm-kmp-default
rpm -qa|grep oracleasm
oracleasm-kmp-default-2.0.8_k3.12.49_11-3.20.x86_64
oracleasm-support-2.1.8-1.SLE12.x86_64
oracleasmlib-2.0.12-1.SLE12.x86_64
oracleasm configure -i
oracleasm createdisk DATA /dev/<...>
oracleasm listdisks
--DATA
ls /dev/oracleasm/disks
Installation tasks:
Issue 3: installation always failed the user equivalence check after starting the OUI installer as user oracle.
However, when I checked manually with runcluvfy, no issue was found at all:
./runcluvfy.sh stage -pre crsinst -n , -verbose
I worked around it by replacing user oracle with another user, but that triggered the next issue.
Issue 4: cannot see ASM disks in the OUI. No matter how I changed the disk discovery path, the disk list stayed empty,
even though I could find the disk manually:
/usr/sbin/oracleasm-discover 'ORCL:*'
Discovered disk: ORCL:DATA
The root cause is that ASM was configured and the disk created as user oracle, while I was installing GI
as a different user, so I could not see the disk that had been created.
Changing the owner of the disk device files solved the issue:
ls /dev/oracleasm/disks
chown -R <install user> /dev/oracleasm/disks
Issue 5: root.sh execution failed.
Failed to create keys in the OLR, rc = 127, Message:
clscfg.bin: error while loading shared libraries: libcap.so.1:
cannot open shared object file: No such file or directory
Fixed the issue with the command below:
zypper in libcap1
Then ohasd failed to start:
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2016-07-24 23:10:28.502:
[client(1119)]CRS-2101:The OLR was formatted using version 3.
I found a good document from SUSE,
"Oracle RAC 11.2.0.4.0 on SUSE Linux Enterprise Server 12 - x86_64",
which makes it clear that SUSE 12 is supported by Oracle GI 11.2.0.4. It also mentions
patch 18370031:
"During the Oracle Grid Infrastructure installation,
you must apply patch 18370031 before configuring the software that is installed."
Patch 18370031 is mentioned in one of the Oracle installation guides for Linux,
but not in the other; I mainly followed the one that omits it, and so missed patch 18370031.
The issue disappeared after I installed patch 18370031:
./OPatch/opatch napply -oh -local /18370031
Errors in file :
ORA-27091: unable to queue I/O
ORA-15081: failed to submit an I/O operation to a disk
ORA-06512: at line 4
Solved by changing the owner of the files related to disk DATA:
ls -l /dev/oracleasm/iid
then chown on the folder /dev/oracleasm/iid and some hidden .* files.
Issues during DB installation
Issue 6: error reported in invoking target 'agent nmhs'.
vi $ORACLE_HOME/sysman/lib/ins_emagent.mk
Search for the line
$(MK_EMAGENT_NMECTL)
Change it to:
$(MK_EMAGENT_NMECTL) -lnnz11
Refer to:
https://community.oracle.com/thread/1093616?tstart=0
Many years ago I set up a dual-boot system with Ubuntu and Windows. Recently, since I got a dedicated machine for Ubuntu, I uninstalled Ubuntu from the old computer, and the system could no longer boot. GRUB works by handing control from the MBR to the Ubuntu system partition, and the Ubuntu system then provides the boot entry for Windows. With Ubuntu uninstalled, this boot chain was broken.
The problem itself is not hard to solve: borrow a Windows installation disk and restore the MBR. But that requires the Windows Administrator password, and since I had not installed that system myself, I had no idea what the password was.
Some posts suggested cracking the Administrator password; I tried and found it too much trouble. And since there was data on the machine, reinstalling was not an option either.
The final solution was to install a new Windows onto the old Ubuntu partition, turning the machine into a dual-Windows system. After installation and reboot, I could enter either system (the new or the old Windows), because the MBR was automatically updated during installation. The remaining steps were to change the old system's Administrator password and delete the redundant new Windows.
While using Gmail I accidentally clicked the "Archive" button, and an important mail disappeared for several days; only today, by pure chance, did I find it again.
The explanation I found on the web:
Archiving moves messages from your inbox into All Mail, letting you tidy up your inbox without deleting anything.
Hard to understand. Frankly, this feature only adds trouble for me. It seems any tool requires you to adapt to it and break it in.
Recently I bought my son a puzzle called Huarong Dao (a Klotski-style sliding-block puzzle). Although the game is billed as one of China's four classic puzzle games, it is actually only about a century old and was introduced from abroad. The localization was done very well, though; call it a creative absorption of a foreign invention.
Solving the puzzle by hand is somewhat hard. Of course solutions have already been published, but I still solved it once more by programming, and discovered that my programming in this area is rather weak: most of the time went into debugging.
I started with depth-first search, which gave me a rough idea of what an answer looks like; later I improved it to breadth-first search and obtained an optimal solution. Also, at first I only considered moving one cell at a time; later I found that by the traditional definition all consecutive moves of one block count as a single step, and adapted the algorithm accordingly.
The hardest part was the UI. For debugging I threw together an Applet, but it was not something I could show my son.
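The post above only describes the approach, so here is a minimal sketch of the breadth-first-search idea in Java, using a toy 1x3 sliding strip instead of the real Huarong Dao board (the class and method names are my own, not from the original program):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class PuzzleBfs {
    // Breadth-first search over puzzle states: states are expanded in order of
    // distance from the start, so the first time the goal is reached we have a
    // minimal-move solution (unlike depth-first search, which finds *a* solution).
    static int minMoves(String start, String goal, Function<String, List<String>> neighbors) {
        Map<String, Integer> dist = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            String s = queue.poll();
            if (s.equals(goal)) return dist.get(s);
            for (String n : neighbors.apply(s)) {
                if (!dist.containsKey(n)) {      // visited check keeps the search finite
                    dist.put(n, dist.get(s) + 1);
                    queue.add(n);
                }
            }
        }
        return -1; // goal unreachable
    }

    // Toy state space: a 1x3 strip where '_' is the blank; a move slides an
    // adjacent tile into the blank.
    static List<String> slide(String s) {
        List<String> out = new ArrayList<>();
        int blank = s.indexOf('_');
        for (int d : new int[]{-1, 1}) {
            int t = blank + d;
            if (t >= 0 && t < s.length()) {
                char[] c = s.toCharArray();
                c[blank] = c[t];
                c[t] = '_';
                out.add(new String(c));
            }
        }
        return out;
    }
}
```

For the real puzzle the state string would encode the ten blocks of the board, and, as noted above, the neighbor function would have to count all consecutive moves of one block as a single step.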
I am just using this blog post to share some meta information:
git://github.com/ueddieu/mmix.git
http://github.com/ueddieu/mmix.git
After two weeks' struggle, I have successfully installed Gentoo, a popular GNU/Linux distribution. For the record, the obstacles I encountered are listed below
(though I cannot remember every solution exactly).
0. Failed to emerge gpm when installing the links package.
If I recall correctly, it was resolved by installing gpm manually.
1. I encountered an issue when installing glib 2.22.5:
no update-desktop-database,
which is in dev-util/desktop-file-utils. When I tried to emerge that, there was a circular dependency on glib.
I have forgotten how I resolved the problem.
2. Later, after installing glib, the ~amd64 keyword let me install gpm-1.20.6, but it conflicted with the manually installed gpm.
I removed the conflicting files and emerged successfully.
3. Failed to emerge tiff.
Edit package.keywords to add the following:
/ ~amd64
After that I was able to use the latest beta version of tiff, which is unstable and masked out.
4. Later, atk-1.28.0 failed to emerge.
Edit /etc/make.conf with the following:
FEATURES="-stricter"
Then it emerged successfully with only some complaints. Without this setting, the warnings from GCC would cause the emerge to fail.
5. When I ran
emerge --update system
gcc was to be upgraded from 4.3.4 to 4.4.3, but it failed because of compilation warnings, again. Adding "-stricter" to the FEATURES variable in /etc/make.conf worked around it.
6. The installation takes a long time; KDE itself took more than 10 hours. There is still a lot of room for improvement! Anyway, it is nice to be able to use it daily.
Add the following to the file C:\Documents and Settings\<user_name>\Application Data\Subversion\servers:
[groups]
all = *.*
[all]
http-proxy-host = ***.**.com
http-proxy-port = 8080
Here the group "all" maps to every server.
The complexity of our network environment affects our work. Take proxy settings as an example: ideally a single global setting would be enough, but in reality we have to configure every program separately, and each has a different syntax.
Today I copied a script from a Word document to the command line and ran it. Unexpectedly, Word had automatically inserted a space, and the execution failed. Specifically:
call ttGridCreate('$TT_GRID');
was turned by Word into
call ttGridCreate(' $TT_GRID');
That space is not easy to spot, especially when you are not the script's author. Stay alert!
Today I did a simple performance test comparing various ways of accessing a Java object's properties:
1. Direct field access.
2. Access through getter methods.
3. Storing and accessing via a Map.
4. Reflection via Field.
5. Reflection via Method.
Repeated 100 times, the results are as follows (in nanoseconds):
* 100 field accesses: 14,806
* 100 method accesses: 20,393
* 100 map accesses: 66,489
* 100 reflection field accesses: 620,190
* 100 reflection method accesses: 1,832,356
Repeated 100,000 times, the results are as follows (in nanoseconds):
* 100000 field accesses: 2,938,362
* 100000 method accesses: 3,039,772
* 100000 map accesses: 10,784,052
* 100000 reflection field accesses: 144,489,034
* 100000 reflection method accesses: 37,525,719
From the results:
1. Getter/setter performance is already close to direct field access (roughly 50% slower); there is no need to resort to direct field access out of worry about getter/setter performance.
2. Replacing a POJO with a Map costs roughly three times more than getters/setters.
3. Reflective access is 50 to 150 times slower than getters/setters; use it with care, and be aware of the considerable performance cost when pursuing dynamism.
4. Note that at 100,000 repetitions the gap between method access and field access narrows; more interestingly, reflective Method access becomes four times faster than reflective Field access, mainly thanks to the JIT.
The results roughly match my expectations. But performance measurements easily lead to one-sided conclusions; if anything here is wrong, corrections are welcome. Thanks.
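The benchmark source is not included in the post, so the sketch below is my own reconstruction of the five access styles (the class name, field name, and loop count are made up); the absolute numbers will of course differ from the table above:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class AccessBenchmark {
    public int value = 42;
    public int getValue() { return value; }

    public static void main(String[] args) throws Exception {
        AccessBenchmark o = new AccessBenchmark();
        Field f = AccessBenchmark.class.getField("value");
        Method m = AccessBenchmark.class.getMethod("getValue");
        Map<String, Integer> map = new HashMap<>();
        map.put("value", o.value);

        int n = 100_000;
        long sum = 0, t;

        t = System.nanoTime();
        for (int i = 0; i < n; i++) sum += o.value;               // 1. direct field
        System.out.println("field:             " + (System.nanoTime() - t));

        t = System.nanoTime();
        for (int i = 0; i < n; i++) sum += o.getValue();          // 2. getter
        System.out.println("method:            " + (System.nanoTime() - t));

        t = System.nanoTime();
        for (int i = 0; i < n; i++) sum += map.get("value");      // 3. map
        System.out.println("map:               " + (System.nanoTime() - t));

        t = System.nanoTime();
        for (int i = 0; i < n; i++) sum += (Integer) f.get(o);    // 4. reflective field
        System.out.println("reflection field:  " + (System.nanoTime() - t));

        t = System.nanoTime();
        for (int i = 0; i < n; i++) sum += (Integer) m.invoke(o); // 5. reflective method
        System.out.println("reflection method: " + (System.nanoTime() - t));

        // keep sum live so the JIT cannot eliminate the loops entirely
        if (sum != 5L * n * 42) throw new AssertionError(sum);
    }
}
```

Keeping the accumulated sum live at the end matters: without it the JIT may remove the loops as dead code and make every timing meaningless.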
0. I am reading the source code of Tomcat 6.0.26. To make the effort pay off,
I am documenting some notes for the record. Thanks to the articles about the Tomcat
source code, and especially the book <<How Tomcat Works>>.
1. There are two server-like concepts: one is called Server, which
manages Tomcat itself (start and stop); the other is called Connector,
which is the server that serves application requests. They listen on different
ports, as server.xml clearly shows:
<Server port="8005" shutdown="SHUTDOWN">
<Service name="Catalina">
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
Although Server is the top-level element, logically it should not be:
in the code, Bootstrap actually starts the service first, which
in turn starts the Server and the Server's services.
2. My focus is on the Connector part: I care about how a request is serviced by
Tomcat. Here are some key classes.
Connector --> ProtocolHandler (Http11Protocol
and AjpProtocol) --> JIoEndpoint
--> Handler (Http11ConnectionHandler
and AjpConnectionHandler)
3. Connector is the most obvious class, but the entry point is not there.
The sequence is like this:
Connector.Acceptor.run()
--> JIoEndpoint.processSocket(Socket socket)
--> SocketProcessor.run()
--> Http11ConnectionHandler.process(Socket socket)
--> Http11Processor.process(Socket socket)
--> CoyoteAdapter.service(Request req, Response res)
The core logic is in the method Http11Processor.process(Socket socket);
CoyoteAdapter.service(Request req, Response res) bridges the Connector module and the Container module.
Any comments are welcome. I may continue the source-code reading and dig deeper into it if time permits.
It is handy to navigate source code with Ctrl+] in Cscope, but I always forget how to navigate back, and have wasted effort many times. So, for the record: Ctrl+t navigates back in Cscope.
One more time: Ctrl+] and Ctrl+t navigate forward and back in Cscope.
How to read the source code of <<TCP/IP Illustrated, Volume 2>>
1. Get the source code; the original link provided in the book is not available any more.
You may need to google for it.
2. Install cscope and vi.
3. Refer to http://cscope.sourceforge.net/large_projects.html for the following steps.
This command would include the source of the whole OS, not only the kernel:
find src -name '*.[ch]' > cscope.files
We actually only care about the kernel source:
find src/sys -name '*.[ch]' > cscope.files
4. wc cscope.files
1613 1613 45585 cscope.files
5. In vim, run
:help cscope
to read the help details.
6. If you run vim in the folder where cscope.out resides, it will be loaded
automatically.
7. Try a few commands:
:cs find g mbuf
:cs find f vm.h
They work. A good start.
P.S. This book is quite old; if you know it well and can recommend a better alternative for learning TCP/IP, please post a comment. Thanks in advance.
My son was playing the piano, and his aunt said, "He just likes playing the difficult ones" (难的, nán de).
My niece said: "Brother wants to play the male ones (男的, also nán de); little sister will play the female ones." (A pun: in Chinese, "difficult ones" and "male ones" sound the same.)
In high school I learned that the sieve method can produce primes. At that time I also had a mistaken conjecture about finding primes:
I believed that near the product of two primes, the chance of finding a prime is very high. For example, 7 x 11 = 77, and nearby is 79, which is prime.
Back then I had already found 11 x 11 = 121 and 7 x 17 = 119, but I wrongly concluded that such failures occur only when one of the factors is a square or a higher power.
Later, when I had a computer, I verified the conjecture by programming and found many counterexamples; I was quite embarrassed by the old mistake.
Although wildly wrong, the conjecture is related to modern prime number theory, especially twin primes. We now know that
there are infinitely many primes, but that the proportion of primes among the natural numbers tends to zero;
therefore the proportion of twin primes also tends to zero. Whether there are infinitely many twin primes is still unproven.
The naive idea behind the conjecture: for the product A of any two primes (greater than 3), either A = 3n+2 or A = 3n+1. If A = 3n+2, only A+2
can be prime; if A = 3n+1, only A-2 can be prime. In fact, however, the conjecture holds only rarely:
I wrote a program to verify it, and among 16-bit integers only about 10% of the cases satisfy the hypothesis.
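The verification program itself is not shown in the post; a minimal sketch of such a check could look like this (the class and method names are mine):

```java
public class PrimeNeighborConjecture {
    // Trial-division primality test; fine for 16-bit-sized inputs.
    static boolean isPrime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    // The (false) conjecture: for primes p and q, a prime sits right next to
    // their product, i.e. p*q-2 or p*q+2 is prime.
    static boolean conjectureHolds(long p, long q) {
        long a = p * q;
        return isPrime(a - 2) || isPrime(a + 2);
    }
}
```

7 x 11 = 77 has the neighboring prime 79, but 11 x 11 = 121 is flanked by 119 = 7 x 17 and 123 = 3 x 41, a counterexample of exactly the kind the program turned up.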
Because I am behind a proxy, git clone from MSYSGIT always failed. The following environment variable needs to be set:
export http_proxy="http://<proxy domain name>:<port>"
After that, git clone over the http protocol worked without any problem, but the git protocol still had issues.
Later I found that git push and git pull often did not work either. After several attempts, I found that using the fuller command-line form solved the problem. The process was as follows:
git pull -- fails
git pull origin -- fails
git pull git@github.com:ueddieu/mmix.git -- it works
It seems the command-line shortcuts lack some user information, such as the user name "git"
(which is kind of strange at first glance).
git push -- fails
git push origin -- fails
git push git@github.com:ueddieu/mmix.git master -- it works
Anyway, now I can check in code smoothly. :)
There are a few cases in which invisible blank characters cause problems,
and the problems are hard to detect precisely because the characters are not visible.
One famous case is the '\t' character used by Makefiles: it marks
the start of a command. If it is replaced by space characters, the Makefile does
not work, but you cannot see the difference just by looking at the file.
This kind of problem can drive newbies crazy.
Last week I encountered a similar issue, also caused by an unnecessary
blank space.
As you may know, '\' is used as line-continuation when you have a very long line, e.g.
when you configure the class path for Java in a property file, you may have something like this.
classpath=/lib/A.jar;/lib/B.jar;\
/lib/C.jar;/lib/D.jar;\
/lib/E.jar;/lib/log4j.jar;\
/lib/F.jar;/lib/httpclient.jar;
But if you add an extra blank space after the '\', you will not get the complete
content of classpath, because '\' works as a line continuation only when it is immediately
followed by '\n' on Unix or '\r''\n' on Windows. Otherwise, e.g. when '\' is followed by
' ''\n', the line is complete after the '\n', and the content after that starts
a new line.
Fortunately, it is easy to check for this kind of extra blank space using vi on Unix:
use the '$' command to go to the end of the line. If there is no extra blank space after the '\',
the cursor lands on the '\'; if there is any blank space after it, the cursor lands
after the '\'.
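Besides checking by eye in vi, a small program can flag broken continuations mechanically. This is my own sketch, not from the original post:

```java
import java.util.ArrayList;
import java.util.List;

public class ContinuationChecker {
    // Returns the 1-based numbers of lines that end with '\' followed by
    // whitespace -- such a backslash no longer works as a line continuation.
    static List<Integer> brokenContinuations(List<String> lines) {
        List<Integer> bad = new ArrayList<>();
        for (int i = 0; i < lines.size(); i++) {
            // regex: anything, then a literal backslash, then trailing spaces/tabs
            if (lines.get(i).matches(".*\\\\[ \\t]+")) {
                bad.add(i + 1);
            }
        }
        return bad;
    }
}
```

Running it over a property file before deployment catches the invisible defect that the post describes.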
Mom and son
"Mom, you haven't been eating fish lately; you've gotten slow" -- 2009.6.2
My son asked to record again, and we were doing a dialog from the script.
"……" (son)
"……, I like mangoes" (mom)
"Mom, I just taught you yesterday and you forgot again? It's 'I like watermelon'."
"Oh, my memory is getting bad!"
"It's because you haven't eaten fish these days; you've gotten slow. Eat more tomorrow!"
"Mom, you look cute in those clothes" -- evening of 2009-6-9
The stories my son had picked were finished. "OK, let's go to sleep!" I said.
"Hey Mom, we haven't recorded yet; I'll get the mp3 player." Ever since I first suggested recording him, my son has asked for it every day.
……
"……, Mom, you're pretty cute! ……" In the middle of an enthusiastic recording, my son suddenly went off script.
"What?" I didn't catch it.
"You look cute in those clothes!" he repeated with a sly smile. "Because your clothes look like a little zebra!" Then I finally understood.
When installing OpenLDAP I mainly followed http://hexstar.javaeye.com/blog/271912
One new problem I encountered was that running ldapsearch reported the following error:
can not find libdb-4.7.so.
My solution was to create a symbolic link /usr/lib/libdb-4.7.so pointing to /usr/local/BerkeleyDB/lib/libdb-4.7.so
After that, I encountered no other problems.
Everything has its underlying cause; even behavior that seems utterly unreasonable on the surface has one. I realized this again today because of an episode with my son this morning.
This morning my son refused to get up; he cried and fussed in bed, would not let his mom go to work, and wanted her to stay and sleep with him.
She had to catch the shuttle bus and had no time, so I stayed home. I lay with him for a while and chatted for ten minutes before learning his reason.
Last night his mom and I were both very tired, so I said we would go to bed early today and sleep together with him. But since we had just
returned to Shanghai there was a lot to do, and in the end we were busy until half past ten. So my son said Dad lied. Of course he may have
other reasons too, such as wanting us to sleep with him every day.
The analysis of the MOR (MXOR) instruction implementation in MMIXware
-- a stupid way to understand the source code.
The implementation of MOR (MXOR) is in file mmix-arith.w:
octa bool_mult(y,z,xor)
  octa y,z; /* the operands */
  bool xor; /* do we do xor instead of or? */
{
  octa o,x;
  register tetra a,b,c;
  register int k;
  for (k=0,o=y,x=zero_octa;o.h||o.l;k++,o=shift_right(o,8,1))
    if (o.l&0xff) {
      a=((z.h>>k)&0x01010101)*0xff;
      b=((z.l>>k)&0x01010101)*0xff;
      c=(o.l&0xff)*0x01010101;
      if (xor) x.h^=a&c, x.l^=b&c;
      else x.h|=a&c, x.l|=b&c;
    }
  return x;
}
It took me several hours to understand the details.
If we treat each octabyte as an 8x8 bit matrix, with each row corresponding to a byte, then
y MOR z = z (matrix multiply) y
For a=((z.h>>k)&0x01010101)*0xff: the sub-expression (z.h>>k)&0x01010101 picks out the last bit of each byte
of z.h>>k, i.e. bit k of each byte of z.h. Depending on that bit,
multiplying by 0xff expands the bit (either 0 or 1) into the whole byte.
e.g.
      0xff
    * 0x01010101
    ------------
    = 0xff
    + 0xff00
    + 0xff0000
    + 0xff000000
    ------------
    = 0xffffffff
(depending on bit k in each byte of z, the result could be #ff00ff00, #ff0000ff, etc.)
Similarly, b=((z.l>>k)&0x01010101)*0xff expands bit k of each byte of z.l into the
whole byte.
Overall, after these two steps, each byte of z is replaced by the replication of its bit k; since k varies
from 0 to 7, the loop actually covers all the bit columns.
For c=(o.l&0xff)*0x01010101: it takes the last byte of o.l (byte k of y) and replicates it into the other three bytes.
Since the result is or/xor-ed into both x.h and x.l, there is no need to replicate it into a separate high half.
One example:
let (z.h>>k)&0x01010101 = 0x01000101, then a = 0xff00ffff;
let (z.l>>k)&0x01010101 = 0x01010001, then b = 0xffff00ff;
let (o.l&0xff) = 0xuv, then c = 0xuvuvuvuv;
then a&c = 0xuv00uvuv;
     b&c = 0xuvuv00uv;
Consider element [i,j] of the result x. In this round, the value accumulated into it by the or (xor)
operation is (bit j of the last byte of o.l) & (bit i of column k of z) (not considering the looping yet).
In this round, the 64 combinations of i and j contribute values to the 64 bits of x.
Notice that o loops over y from the last byte to the first. There are 8 loops/rounds; in
round k, element [i,j] accumulates (bit j of the (k+1)-th byte from the end of y) & (bit i of the (k+1)-th
column from the end of z).
That means column j of y is multiplied with row i of z, which conforms to the definition of
z matrix_multiply y.
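To convince myself that the analysis is right, I find it helpful to port bool_mult to Java on a single 64-bit long (this is my own port, OR case only; MMIXware's octa struct splits the value into two tetras):

```java
public class Mor {
    // Port of MMIXware's bool_mult (OR case) to one 64-bit long.
    // Treating each octabyte as an 8x8 bit matrix (one byte per row),
    // mor(y, z) computes the boolean matrix product z * y.
    static long mor(long y, long z) {
        long x = 0;
        for (int k = 0; k < 8; k++) {
            long yByte = (y >>> (8 * k)) & 0xff;  // byte k of y, from the low end
            if (yByte != 0) {
                // bytes of z whose bit k is set become 0xff, others become 0x00
                long rows = ((z >>> k) & 0x0101010101010101L) * 0xffL;
                // replicate y's byte into every byte position
                long cols = yByte * 0x0101010101010101L;
                x |= rows & cols;
            }
        }
        return x;
    }
}
```

A quick sanity check: with the identity matrix 0x8040201008040201 (byte k holds the single bit 1<<k), mor(identity, z) gives back z, exactly as matrix multiplication demands.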
Games are closely connected with mathematics; playing the Chinese nine linked rings recently drove this home for me.
I started playing the puzzle because Donald Knuth's book mentions the relationship between Gray codes and the nine rings. To understand the algorithm for generating Gray codes, I specifically bought the puzzle; after all, the
book's description is much easier to grasp once you actually play.
Through the game I not only learned to solve the rings, but also mastered one algorithm for generating Gray codes.
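The post does not spell the algorithm out; the standard binary-reflected Gray code, which I take to be the one meant, fits in one line:

```java
public class GrayCode {
    // Binary-reflected Gray code: consecutive codes differ in exactly one bit,
    // just as consecutive states of the nine linked rings differ by one ring.
    static int gray(int i) {
        return i ^ (i >>> 1);
    }
}
```

For example, gray(0) through gray(7) yields 0, 1, 3, 2, 6, 7, 5, 4: each step flips a single bit.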
A detailed read-through of a piece of beautiful and tricky bitwise code.
The following code is from MMIXware; it is used to implement the wyde difference between two octabytes (shown here for one tetrabyte, i.e. two wydes at a time).
In file "mmix-arith.w":
tetra wyde_diff(y,z)
  tetra y,z;
{
  register tetra a=((y>>16)-(z>>16))&0x10000;
  register tetra b=((y&0xffff)-(z&0xffff))&0x10000;
  return y-(z^((y^z)&(b-a-(b>>16))));
}
It is hard to understand without some thinking and verification; here is the process I used
to check the correctness of this algorithm.
let y = 0xuuuuvvvv;
    z = 0xccccdddd; (please note that the repeated letters stand for arbitrary hex digits.)
then y>>16 = 0x0000uuuu;
     z>>16 = 0x0000cccc;
then (y>>16)-(z>>16) = 0xffffgggg if #uuuu < #cccc, or
     (y>>16)-(z>>16) = 0x0000gggg if #uuuu >= #cccc
so variable a = 0x00010000 if #uuuu < #cccc, or
   variable a = 0x00000000 if #uuuu >= #cccc
similarly, we get
   variable b = 0x00010000 if #vvvv < #dddd, or
   variable b = 0x00000000 if #vvvv >= #dddd
For b-a-(b>>16), there are four different results depending on the two comparisons:
when #uuuu >= #cccc and #vvvv >= #dddd, b-a-(b>>16) = 0x00000000;
when #uuuu >= #cccc and #vvvv < #dddd, b-a-(b>>16) = 0x0000ffff;
when #uuuu < #cccc and #vvvv >= #dddd, b-a-(b>>16) = 0xffff0000;
when #uuuu < #cccc and #vvvv < #dddd, b-a-(b>>16) = 0xffffffff.
You can see that >= maps to #0000 and < maps to #ffff in the corresponding wyde.
For y-(z^((y^z)&(b-a-(b>>16)))): when the mask b-a-(b>>16) is 0x00000000, z^((y^z)&0) = z^0 = z,
so the expression equals y-z.
Similarly, when the mask is 0xffffffff, z^((y^z)&0xffffffff) = z^(y^z) = y,
so the expression equals y-y = 0.
When the mask is 0xffff0000 or 0x0000ffff, we can treat y and z as two separate wydes,
and each wyde in the result is correct.
You may think it is a little stupid to verify such details, but from my point of view,
without such a detailed analysis I cannot understand the algorithm in the code. With hard
work like this, I finally understood it; the pleasure deserves the effort.
I am left wondering how the author discovered such an ingenious algorithm.
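As a further cross-check of the case analysis, the function can be ported to Java (int standing in for tetra) and compared against the obvious per-wyde saturating subtraction; this port is mine, not part of MMIXware:

```java
public class WydeDiff {
    // Port of MMIXware's wyde_diff: subtracts each 16-bit wyde of z from the
    // corresponding wyde of y, saturating at 0 instead of borrowing.
    static int wydeDiff(int y, int z) {
        int a = ((y >>> 16) - (z >>> 16)) & 0x10000;     // 0x10000 iff the high wyde borrows
        int b = ((y & 0xffff) - (z & 0xffff)) & 0x10000; // 0x10000 iff the low wyde borrows
        return y - (z ^ ((y ^ z) & (b - a - (b >>> 16))));
    }

    // The straightforward version the bit trick must agree with.
    static int wydeDiffSlow(int y, int z) {
        int hi = Math.max((y >>> 16) - (z >>> 16), 0);
        int lo = Math.max((y & 0xffff) - (z & 0xffff), 0);
        return (hi << 16) | lo;
    }
}
```

For example, wydeDiff(0x00050003, 0x00020007) gives 0x00030000: the high wyde is 5-2=3, and the low wyde 3-7 saturates to 0.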
Last night, around midnight, Ruirui crawled out of his blanket and lay on top of it. He ended up coughing badly and threw up several times; after some loquat syrup he quickly fell asleep.
Before long he started having a nightmare. Here is his sleep talk:
"Daddy's hand is gone"
"Daddy fell down"
"My dear daddy is gone, what am I going to do"
Probably from watching too many of the Hongen GOGO English-learning DVDs lately.
After reading <<MMIX: A RISC Computer for the New Millennium>>, I am inspired to create an MMIX simulator in Java.
Donald Knuth has already created a high-quality MMIX simulator in C, so why bother creating a new one in Java?
First, I want to learn more about how the computer works. I think re-implementing a simulator for MMIX can
help me gain a better understanding.
Second, I want to exercise my Java skills.
After about one month's work, I realize that I cannot finish it by myself, so I am looking for help.
If you are interested in MMIX and know Java, please give me a hand.
Currently I have finished most of the instructions, but some important and complex ones are not completed
yet.
I have developed a few JUnit test cases for some instructions, but they are far from covering all the instructions (there are 256 instructions in total).
A few of the sample MMIX programs in Donald Knuth's MMIXware package, such as cp.mmo and hello.mmo, can be
simulated successfully, but there is much more to support.
To help on this project, you first need access to the current source code. It is hosted on Google
Code; please follow the steps below to access it.
Use this command to anonymously check out the latest project source code:
# Non-members may check out a read-only working copy anonymously over HTTP.
svn checkout http://mmix.googlecode.com/svn/trunk/ mmix-read-only
If you are willing to help, please comment on this blog with your email address.
There are many questions that come to mind when I read Linux kernel books and source
code. As time goes by, I become more knowledgeable than before and can address
some of those questions myself; here is the first one.
Q: Why does the kernel have to map high memory into kernel space? Why not just allocate the
high memory and map it only in the user process?
A: Because the kernel also needs to access the high memory before it returns the allocated
memory to the user process. For example, the kernel must zero or initialize
the page for security reasons. Please refer to Linux Device Drivers, page 9.
Q: Why not let the C library zero or initialize the page? That would save the kernel's effort and simplify
the kernel.
A: Besides requesting memory through the C library, a user program can also request memory through
direct system calls; in that situation the security would not be guaranteed, and the
information in memory would be leaked.
9/26/2008 8:57AM
Today I researched the different ways to substitute text in a file. For the record, I am writing them down.
1. Use UltraEdit: super easy for a Windows user with UltraEdit installed.
Use Ctrl+R to open the Replace dialog and follow your intuition.
2. Use vi on Unix:
:s/xx/yy/
replaces xx with yy on the current line (use :%s/xx/yy/g for every occurrence in the file).
3. Use a filter, such as sed or awk, on Unix:
sed -e 's/xx/yy/g' file.in > file.out
replaces xx with yy on all lines. sed does not change the original input file, so I redirect the output to file.out.
1 WYSIWYM vs WYSIWYG
WYSIWYM stands for What You See Is What You Mean; WYSIWYG stands for What You See Is What You Get.
Microsoft Word is usually held up as an example of WYSIWYG. Today I had a look at the tool named LyX, which is an example of WYSIWYM. From an end user's point of view, there is more similarity than difference between them:
they both display the resulting layout on the fly, and they both provide a button to typeset the document.
The difference I can see between them is that LyX uses text files while Word uses binary files, but I don't think that matters much.
In my humble opinion, the real difference between Word and LyX/LaTeX is the following. In Word, you typeset at a lower level: you can control all the details, but it also takes more effort. In LyX/LaTeX, you typeset at a higher level: you only need to figure out the logical structure of the document. The resulting layout is not decided by you; you effectively share a layout developed by experts. I think that is the key advantage of WYSIWYM.
Yesterday we found that the application could not send mail successfully, and the performance of the module using the email feature was also very bad. I suspected that the mail server host name could not be resolved on the application server.
I executed the following command:
host <mail server host name>
It showed a strange IP, which means the mail server host name was not being resolved properly.
Then I executed the command below:
man host
The output told me to look at /etc/resolv.conf, so I opened it with
vi /etc/resolv.conf
The content was as follows:
nameserver <name server 1>
nameserver <name server 2>
I updated the config with the correct DNS server IPs, and everything was OK.
P.S. It seems that ping and host resolve names differently: for some host names, I can ping them but not host them.
The reality is far from the ideal - the inelegance in operating systems
I am interested in operating systems. The more concepts and details I learn, the more I realize that reality is far from the ideal. The root cause is history and, to some extent, backward compatibility: we cannot afford to build brand-new things from scratch; we must carry many old things along in everything we build.
Let me give some examples of how history makes current operating systems complicated and inelegant.
1. DMA
DMA stands for Direct Memory Access, a way to improve parallelism in a computer system: with DMA, a peripheral device can access main memory while the CPU is running. But for historical reasons, on the x86 platform some DMA devices have only 24 address lines, which limits their reach to 16M. Since x86 also lacks an IO-MMU to remap addresses, the memory usable for such DMA is [0, 16M). It definitely complicates memory management.
2. High memory
Since the Linux kernel has only 1G of linear address space, it cannot address all 4G of physical memory on a 32-bit machine. This is really a design issue in Linux, for historical reasons: it did not predict that physical memory would one day become so large. Later, to support more than 1G of physical memory, the CONFIG_HIGHMEM compile option was added. There are also other ways to fix the problem, such as a 4G kernel space / 4G user space split.
3. PAE
PAE stands for Physical Address Extension; it makes it possible to support up to 64G of physical memory. To me it is just a temporary solution that does not deserve the effort; I don't even want to look at the corresponding documents. It does not make much sense; I would rather move directly to 64-bit platforms, though those have their own problems.
The above is just some of the inelegance in hardware, mostly caused by history. I wonder how we can keep up rapid development under the burden of history; maybe at some point we will finally need to throw the history away and move on with a brand-new start.
Virtual Memory Area
Virtual Memory Area is also called Memory Region in some books.
In a process address space there are many memory areas; contiguous addresses are divided into different memory areas when their access rights differ. For example, one Java process may have 359 memory areas.
So the kernel needs an effective way to insert into, remove from, and search the list of memory areas. The semantics of the find-area API are as follows.
Return NULL if
1. the list itself is empty, or
2. the list is not empty and the address is bigger than the end of the last memory area.
Return the found area if
1. the address is in the region of one area, or
2. the address is not in the region of any area but is not beyond the last area;
this means it is in a hole between areas, and the area to the right of the hole is returned.
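The semantics above can be sketched with a sorted map standing in for the kernel's actual tree of areas; this is an illustration of the lookup rule only, not real kernel code:

```java
import java.util.Map;
import java.util.TreeMap;

public class AreaList {
    // Each area is [start, end); keys are start addresses, values are ends.
    // Areas are disjoint, as virtual memory areas are.
    private final TreeMap<Long, Long> areas = new TreeMap<>();

    void insert(long start, long end) {
        areas.put(start, end);
    }

    // find-area semantics: null when the list is empty or addr lies beyond the
    // last area; otherwise the area containing addr, or the first area to the
    // right of the hole addr falls into.
    Map.Entry<Long, Long> findArea(long addr) {
        for (Map.Entry<Long, Long> e : areas.entrySet())
            if (addr < e.getValue())   // first area whose end is above addr
                return e;
        return null;
    }
}
```

The real kernel replaces the linear scan with a balanced tree so the lookup stays fast even with hundreds of areas per process.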
The kernel tries to use as little resource as possible. Here is an example: originally, in kernel 2.4, the size of the kernel stack was 8K; now, in kernel 2.6, it can be 4K if you enable that at compile time.
Why would the kernel spend effort on such a feature when most PCs have more than a gigabyte of memory? I think it has something to do with the C10K problem, i.e. handling ten thousand concurrent clients (processes or threads). On a system with more than ten thousand processes, such as a web server, saving 4K in every kernel stack amounts to 4K * 10K = 40M of memory saved in total, which is a big deal!
How is that achievable? Originally the kernel-mode stack was also used for exception and interrupt handling, but exception and interrupt handling is not specific to any process. So in 2.6, interrupts and exceptions get their own per-CPU stacks, and the kernel stack is used only by the process in kernel mode; the effective kernel stack did not actually become smaller.
2.4: 8K stack shared between process kernel mode, exceptions, and interrupts
vs.
2.6: 4K stack specific to the process's kernel-mode stack
     4K stack specific to the exception stack
     4K stack specific to the interrupt stack
Besides this, in the 8K stack of 2.4 the task_struct sits at the bottom of the stack and costs about 1K; in the 4K stack of 2.6 only the thread_info (about 50 bytes) is at the bottom, and the task_struct is put into a per-CPU data structure.
Here is just a high-level summary of my understanding of Linux kernel memory management. I think it can help achieve a better understanding of the book <<Understanding the Linux Kernel>>.
It is said that memory management is the most complex subsystem in the Linux kernel, yet there are not many system calls for it, because most of the complex mechanisms, such as COW (Copy On Write) and on-demand paging, happen transparently to the user process. For a user process to successfully refer to a linear memory address, the following factors are necessary:
1. vm_area_struct (Virtual Memory Area, Memory Region) is set up correctly.
2. Physical memory is allocated.
3. The Page Global Directory, page tables, and the corresponding entries are correctly set up according to the Virtual Memory Area and the physical memory.
These three factors can be further simplified as:
1. virtual memory
2. physical memory
3. the mapping between virtual memory and physical memory
From the user process's perspective, only virtual memory is visible: when a user process applies for memory, it gets virtual memory; physical memory may not be allocated yet. All three factors are managed by the kernel, and they can be thought of as three resources the kernel manages. The kernel needs to manage not only the virtual memory in the user address space but also the virtual memory in the kernel address space.
When a user process tries to use its virtual memory but the physical memory is not allocated yet, a page fault happens; the kernel takes charge of it, allocates the physical memory, and sets up the mapping. The user process then re-executes the instruction and everything moves forward smoothly. This is called on-demand paging.
Besides these there are many more concepts, such as memory mapping and non-linear memory mapping. I will continue this article when I dig into the details.
ps -H -A
can show the relationship between all processes in a tree format; it is helpful when you want to research the internals of UNIX.
init
  keventd
  ksoftirqd/0
  bdflush
  kswapd
We can see from the above that all processes are children of init (directly or indirectly); in particular, the kernel threads are also children of the init process.
Process 0 is special: it is not displayed.
From the following:
sshd
  sshd
    sshd
      bash
        vim
          cscope
  sshd
    sshd
      bash
        ps
we can see how ssh works; I had actually created two ssh sessions to the server.
According to the following explanation in Xusage.txt:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
java -Xms512M should allocate at least 512M of memory for the JVM. But checking with top on Linux, the RSS and SIZE values are far smaller than 512M. My understanding is that when Java requests memory from the operating system it uses the mmap2 or old_mmap system call, and neither of them actually allocates physical memory; they only allocate virtual memory. So the pre-allocated memory is not committed until it is actually used.
There is not much grammar to regular expressions; here is an incomplete summary for future reference.
meta-character
. any character
| or
() grouping
[] character class
[^] negative character class
Greedy Quantifier
? optional
* any amount
+ at least one
lazy quantifier
??
*?
+?
possessing quantifier
?+
*+
++
position related
^ start of the line
\A start of the string
$ end of the line
\Z end of the string
\< start of the word
\> end of the word
\b word boundary (start or end of a word)
non-capturing group (?:Expression)
non-capturing atomic group (?>Expression)
positive lookahead (?=Expression)
negative lookahead (?!Expression)
positive lookbehind (?<=Expression)
negative lookbehind (?<!Expression)
\Q start quoting
\E end quoting
mode modifier
(?modifier)Expression(?-modifier)
valid modifier
i case insensitive match mode
x free spacing
s dot matches all match mode
m enhanced line-anchor match mode
(?modifier:Expression)
comments:
(?#Comments)
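The greedy / lazy / possessive distinction above is easiest to see in action. A quick Java illustration (the input string is made up for the example):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuantifierDemo {
    public static void main(String[] args) {
        String s = "<a><b>";
        // Greedy: .* grabs as much as possible, then backtracks just enough,
        // so the single match spans the whole string.
        System.out.println(firstMatch("<.*>", s));   // <a><b>
        // Lazy: .*? grabs as little as possible, so each tag matches separately.
        System.out.println(firstMatch("<.*?>", s));  // <a>
        // Possessive: .*+ swallows everything including '>' and never gives
        // anything back, so the pattern fails to match at all.
        System.out.println(firstMatch("<.*+>", s));  // null
    }

    // Return the first match of regex in input, or null if there is none.
    static String firstMatch(String regex, String input) {
        Matcher m = Pattern.compile(regex).matcher(input);
        return m.find() ? m.group() : null;
    }
}
```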
kernel memory mapping summary
Today I finally became clear about the relationships between
fixed mapping
permanent kernel mapping
temporary kernel mapping
noncontiguous memory area mapping
(I feel that most of these names are not appropriate; in some texts they will mislead the reader.)
The 4G linear virtual address space is divided into two major parts:
kernel space mapping [3G, 4G)
user space mapping [0, 3G)
The kernel space mapping is divided into further pieces:
linear mapping [3G, 3G + 896M)
non-linear mapping [3G + 896M + 8M, 4G)
1. Fixed mapping (a misleading name; it should be "compile-time mapping", since the virtual address is decided at compile time)
2. Temporary mapping
3. Permanent mapping
4. noncontiguous memory area mapping (Vmalloc area)
The following diagram is for reference.
FIXADDR_TOP (=0xfffff000)
fixed_addresses (temporary kernel mapping is part of it)
#define __FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
FIXADDR_START (FIXADDR_TOP - __FIXADDR_SIZE)
temp fixed addresses (used in boot time)
#define __FIXADDR_BOOT_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
FIXADDR_BOOT_START (FIXADDR_TOP - __FIXADDR_BOOT_SIZE)
Persistent kmap area (4M)
PKMAP_BASE ( (FIXADDR_BOOT_START - PAGE_SIZE*(LAST_PKMAP + 1)) & PMD_MASK )
2*PAGE_SIZE
VMALLOC_END (PKMAP_BASE-2*PAGE_SIZE) or (FIXADDR_START-2*PAGE_SIZE)
noncontiguous memory area mapping (Vmalloc area)
VMALLOC_START (((unsigned long) high_memory + 2*VMALLOC_OFFSET-1) & ~(VMALLOC_OFFSET-1))
high_memory MIN(896M, physical memory size)
Below is an excerpt of the source code.
#ifdef CONFIG_X86_PAE
#define LAST_PKMAP 512
#else
#define LAST_PKMAP 1024
#endif
#define VMALLOC_OFFSET (8*1024*1024)
#define VMALLOC_START (((unsigned long) high_memory + \
2*VMALLOC_OFFSET-1) & ~(VMALLOC_OFFSET-1))
#ifdef CONFIG_HIGHMEM
# define VMALLOC_END (PKMAP_BASE-2*PAGE_SIZE)
#else
# define VMALLOC_END (FIXADDR_START-2*PAGE_SIZE)
#endif
enum fixed_addresses {
FIX_HOLE,
FIX_VDSO,
FIX_DBGP_BASE,
FIX_EARLYCON_MEM_BASE,
#ifdef CONFIG_X86_LOCAL_APIC
FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */
#endif
#ifdef CONFIG_X86_IO_APIC
FIX_IO_APIC_BASE_0,
FIX_IO_APIC_BASE_END = FIX_IO_APIC_BASE_0 + MAX_IO_APICS-1,
#endif
#ifdef CONFIG_X86_VISWS_APIC
FIX_CO_CPU, /* Cobalt timer */
FIX_CO_APIC, /* Cobalt APIC Redirection Table */
FIX_LI_PCIA, /* Lithium PCI Bridge A */
FIX_LI_PCIB, /* Lithium PCI Bridge B */
#endif
#ifdef CONFIG_X86_F00F_BUG
FIX_F00F_IDT, /* Virtual mapping for IDT */
#endif
#ifdef CONFIG_X86_CYCLONE_TIMER
FIX_CYCLONE_TIMER, /*cyclone timer register*/
#endif
#ifdef CONFIG_HIGHMEM
FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */
FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1,
#endif
#ifdef CONFIG_ACPI
FIX_ACPI_BEGIN,
FIX_ACPI_END = FIX_ACPI_BEGIN + FIX_ACPI_PAGES - 1,
#endif
#ifdef CONFIG_PCI_MMCONFIG
FIX_PCIE_MCFG,
#endif
#ifdef CONFIG_PARAVIRT
FIX_PARAVIRT_BOOTMAP,
#endif
__end_of_permanent_fixed_addresses,
/* temporary boot-time mappings, used before ioremap() is functional */
#define NR_FIX_BTMAPS 16
FIX_BTMAP_END = __end_of_permanent_fixed_addresses,
FIX_BTMAP_BEGIN = FIX_BTMAP_END + NR_FIX_BTMAPS - 1,
FIX_WP_TEST,
__end_of_fixed_addresses
};
scale up - vertically scale
scale out - horizontally scale
scale out
1. Use share-nothing clustering architectures
As mentioned in my earlier article, session failover cannot completely avoid errors when failures happen, and it damages performance and scalability.
2. Use scalable session replication mechanisms
The most scalable option is paired-node replication; the least scalable is using a database as the session persistence store.
3. Use collocated deployment instead of a distributed one.
4. Shared resources and services
Database servers, JNDI trees, LDAP Servers, and external file systems can be shared by the nodes in the cluster.
5. Memcached
Memcached's magic lies in its two-stage hash approach. It behaves as though it were a giant hash table, looking up key = value pairs. Give it a key, and set or get some arbitrary data. When doing a memcached lookup, first the client hashes the key against the whole list of servers. Once it has chosen a server, the client then sends its request, and the server does an internal hash key lookup for the actual item data.
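A toy Java sketch of that two-stage idea (the in-process "servers" and the naive modulo hash are illustrative only; real memcached clients use proper hash functions and often consistent hashing):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of memcached's two-stage hash: stage 1 hashes the key against
// the server list to pick a server; stage 2 is an ordinary hash lookup
// inside that server.
public class TwoStageHash {
    // Stage 2 target: each "server" is just an in-process hash table here.
    static final List<Map<String, String>> servers =
            List.of(new HashMap<>(), new HashMap<>(), new HashMap<>());

    // Stage 1: pick a server from the key (naive modulo for the demo).
    static Map<String, String> pickServer(String key) {
        int idx = Math.floorMod(key.hashCode(), servers.size());
        return servers.get(idx);
    }

    static void set(String key, String value) { pickServer(key).put(key, value); }
    static String get(String key) { return pickServer(key).get(key); }

    public static void main(String[] args) {
        set("user:42", "alice");
        System.out.println(get("user:42")); // alice
    }
}
```

The point of stage 1 is that every client, given the same key and server list, independently picks the same server, so no central directory is needed.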
6. Terracotta
Terracotta extends the Java Memory Model of a single JVM to include a cluster of virtual machines such that threads on one virtual machine can interact with threads on another virtual machine as if they were all on the same virtual machine with an unlimited amount of heap.
7. Use an unorthodox approach to achieve high scalability
Today I ran into a strange Hibernate problem. (I am using Hibernate 2.1, which is fairly old; I do not know whether this problem still exists in Hibernate 3.)
Below is the captured exception stack trace.
java.lang.ClassCastException: java.lang.Boolean
at net.sf.hibernate.type.StringType.set(StringType.java:26)
at net.sf.hibernate.type.NullableType.nullSafeSet(NullableType.java:48)
at net.sf.hibernate.type.NullableType.nullSafeSet(NullableType.java:35)
at net.sf.hibernate.persister.EntityPersister.dehydrate(EntityPersister.java:393)
at net.sf.hibernate.persister.EntityPersister.insert(EntityPersister.java:466)
at net.sf.hibernate.persister.EntityPersister.insert(EntityPersister.java:442)
at net.sf.hibernate.impl.ScheduledInsertion.execute(ScheduledInsertion.java:29)
at net.sf.hibernate.impl.SessionImpl.executeAll(SessionImpl.java:2382)
at net.sf.hibernate.impl.SessionImpl.execute(SessionImpl.java:2335)
at net.sf.hibernate.impl.SessionImpl.flush(SessionImpl.java:2204)
The strange thing was that the program ran fine on my local Tomcat, but broke as soon as it was deployed to the Linux server.
After careful analysis, I found that the persisted object defines both a get method and an is method for the same property. A sample is shown below:
public class FakePO {
String goodMan;
public String getGoodMan() {
return goodMan;
}
public void setGoodMan(String goodMan) {
this.goodMan = goodMan;
}
public boolean isGoodMan(){
return "Y".equalsIgnoreCase(goodMan);
}
}
I suspected that this derived helper method, isGoodMan(), caused the problem. By tracing the Hibernate 2 source code, I found that Hibernate 2 accesses the PO via the reflection API as follows:
private static Method getterMethod(Class theClass, String propertyName) {
Method[] methods = theClass.getDeclaredMethods();
for (int i=0; i<methods.length; i++) {
// only carry on if the method has no parameters
if ( methods[i].getParameterTypes().length==0 ) {
String methodName = methods[i].getName();
// try "get"
if( methodName.startsWith("get") ) {
String testStdMethod = Introspector.decapitalize( methodName.substring(3) );
String testOldMethod = methodName.substring(3);
if( testStdMethod.equals(propertyName) || testOldMethod.equals(propertyName) ) return methods[i];
}
// if not "get" then try "is"
/*boolean isBoolean = methods[i].getReturnType().equals(Boolean.class) ||
methods[i].getReturnType().equals(boolean.class);*/
if( methodName.startsWith("is") ) {
String testStdMethod = Introspector.decapitalize( methodName.substring(2) );
String testOldMethod = methodName.substring(2);
if( testStdMethod.equals(propertyName) || testOldMethod.equals(propertyName) ) return methods[i];
}
}
}
return null;
}
Reading the code above carefully, you can see that Hibernate simply iterates over the class's public methods looking for a name that matches the property, without checking whether the method's return type matches the property's type. So in our example it may return either the get method or the is method, depending on the order of the method list, and that order is not guaranteed in any way. This also explains why the problem only showed up on a particular platform.
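The ambiguity is easy to reproduce with the reflection API directly (FakePO below is the sample class from above, nested here to keep the sketch self-contained):

```java
import java.lang.reflect.Method;

// Demonstrates that a property named "goodMan" is matched by BOTH
// getGoodMan() and isGoodMan(), and that getDeclaredMethods() returns
// methods in no guaranteed order -- exactly the trap Hibernate 2 fell into.
public class ReflectionOrder {
    static class FakePO {
        String goodMan;
        public String getGoodMan() { return goodMan; }
        public void setGoodMan(String goodMan) { this.goodMan = goodMan; }
        public boolean isGoodMan() { return "Y".equalsIgnoreCase(goodMan); }
    }

    public static void main(String[] args) {
        for (Method m : FakePO.class.getDeclaredMethods()) {
            String name = m.getName();
            if (m.getParameterTypes().length == 0
                    && (name.equals("getGoodMan") || name.equals("isGoodMan"))) {
                // Whichever of the two appears first is what a naive
                // name-based scan would pick as the "goodMan" getter.
                System.out.println(name + " -> returns "
                        + m.getReturnType().getSimpleName());
            }
        }
    }
}
```

Both candidates are printed; a scan that stops at the first name match silently depends on JVM-specific ordering.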
Recently I have been reading the implementation of the write system call. Although some details are still not clear to me, I now have a rough understanding of the mechanism. A summary follows.
Here I assume the most common case and ignore Direct IO. Viewed from a high level, writing content to a file takes the following steps:
1. sys_write copies the content the user process wants to write into the kernel's page cache for that file. sys_write itself ends here.
2. The pdflush kernel threads (periodically, or triggered by kernel thresholds) flush the dirty page-cache pages; in fact they only submit IO requests to the underlying driver.
3. The IO requests are not executed synchronously; they are scheduled and executed by the underlying driver, which issues the DMA commands.
4. When the physical IO completes, an interrupt notifies the kernel, which updates the status of the IO.
Now I have to go put my son to bed. I will continue to refine each part when I have time.
The call path of sys_write (my Linux kernel version is 2.6.24, and the file system is ext3):
asmlinkage ssize_t sys_write(unsigned int fd, const char __user * buf, size_t count)
vfs_write(file, buf, count, &pos);
file->f_op->write(file, buf, count, pos);
Here file->f_op is a table of function pointers initialized when the file is opened; for the ext3 file system the corresponding write function is do_sync_write.
Below are the key points of its implementation.
for (;;) {
300 ret = filp->f_op->aio_write(&kiocb, &iov, 1, kiocb.ki_pos);
301 if (ret != -EIOCBRETRY)
302 break;
303 wait_on_retry_sync_kiocb(&kiocb);
304 }
305
306 if (-EIOCBQUEUED == ret)
307 ret = wait_on_sync_kiocb(&kiocb);
filp->f_op->aio_write(&kiocb, &iov, 1, kiocb.ki_pos); is the core of the implementation; this function pointer points to ext3_file_write.
Line 307 waits for the IO to complete. "Completion" here only means the request has entered the IO queue, not that the physical IO has finished.
generic_file_aio_write(iocb, iov, nr_segs, pos);
__generic_file_aio_write_nolock(iocb, iov, nr_segs, &iocb->ki_pos);
generic_segment_checks(iov, &nr_segs, &ocount, VERIFY_READ);
generic_file_buffered_write(iocb, iov, nr_segs, pos,ppos,count,written);
generic_file_direct_IO(WRITE, iocb, iov, pos, *nr_segs);
The call sequence below this point is still very long, and I cannot digest it all at once; it is listed here just for my own reference.
Recently I started reading some source code under Unix. Here are a few observations.
1. To do a good job, one must first sharpen one's tools.
At first I searched for keywords with find, xargs, and egrep together, and reading code that way was very inefficient. After installing ctags it became much more convenient. I had not installed ctags initially because I thought it would be troublesome; in fact it is easy to install, just a few steps, and a Google search sorted it out.
2. Practice promptly.
Although my initial way of reading code was clumsy, that drive was very useful: only by practicing hands-on can you make progress. Otherwise I might still be stuck at the stage of reading code in books.
3. Unix tools look less easy to use than Windows tools. Not really; the entry barrier is just a bit higher, and most people, like me, dare not touch them because of it. Once past the barrier, you find that Unix tools are truly short and powerful. Take reading source code with Vim + Ctags: the cost/benefit ratio is excellent, in line with the 80/20 principle.
As the title says.
A bean with the same name was configured twice. According to the load order of the configuration files, the bean defined later takes effect.
It seems I have to be careful myself and not rely too much on Spring's own checks (it can only detect duplicate bean definitions within a single file).
J2EE projects mostly follow a layered architecture, so the package structure is naturally layer-based: the DAO layer has a DAO package, the service layer has a service package, and under these, sub-packages are divided by module.
I think another feasible scheme is to divide packages by module, and only if a package becomes complex, say more than ten classes, subdivide it by layer. Ordinary modules are simple and need no sub-packages.
By the principle of high cohesion and low coupling, this division is more cohesive. If you divide by layer, classes in the same layer actually have little to do with each other. Consider the DAO layer: how much relationship is there between those DAOs?
The benefit of the new scheme is that if you need to modify a module, the changes are relatively concentrated, because they all sit in one package.
Layered architecture is so common now that there is no need to express it through the package structure; expressing it in class names is enough. In other words, the layering does not have to be reflected by packages.
The new scheme may have one problem: there may be redundant implementations across modules. If you adopt it, you need to take precautions on this point.
Of course this is still just an idea; I have not practiced it in a project. I hope readers can point out problems this approach may bring.
Recently I read some project code and studied its architecture and design. Basically I admire it, because the code was written several years ago, yet many patterns and principles mentioned in books are already applied. But I also have some different opinions; I think many things are used inappropriately.
1. Abuse of inheritance. For example, the class hierarchy already uses the Template Method pattern, so subclasses should just override the template's implementations as needed. Yet for some unknown reason, some subclasses are abstract and must be extended yet again, making the inheritance tree unnecessarily deep.
2. Abuse of interfaces. I often see an interface that defines a pile of methods but has only one implementation. Such an interface is pure decoration; you cannot expect it to be stable. In reality the interface will change whenever the implementation changes, so what is it for?
3. A fondness for abstracting out frameworks that are unnecessarily complex for the current application. They do not actually increase reuse; they reduce the readability of the code.
4. Abuse of the Factory pattern. Don't people find patterns hard to apply in practice? Want to use a pattern? Easy: just define a factory class for every object. Honestly, I cannot see what design benefit those factories deliver.
5. Insufficient abstraction. In a pagination implementation, the query string was abstracted into a class; the right approach would have been to abstract the query result.
As a project evolves it easily becomes harder and harder to maintain; after all, many different ideas and different people's code get mixed together, so various problems are normal. I hope other projects can take this as a warning.
Today I successfully finished a puzzle: the Kongming lock (a Chinese burr puzzle).
Simply put, the pieces are six rectangular wooden bars; five of them have notches cut into them, and the remaining one is a plain bar. The six bars are finally assembled into a solid shape that interlocks in three mutually penetrating directions.
I bought this toy last weekend at Huangshan. I had tried four times without success. Tonight after dinner, my wife and I finished it together, almost by accident.
Basically my analytical ability is fine; I can grasp the key points. But my dexterity is not as good as my wife's: just when success was within reach, she was quicker with her hands and placed the last piece.
Actually, when I first played with it, I worked out some basic essentials. They are not a recipe for success, but they do avoid unnecessary failures:
1. Take the single plain bar as the starting point of your thinking.
2. Two directions each have two bars crossing the plain bar. One direction needs a centered notch (I worked this out at the beginning); the other direction needs a left- or right-offset notch (I only worked this out today).
3. Analyze the notch shapes of the bars you have; there are fewer than 20 reasonable combinations.
4. Experiment by hand, work out more rules, and develop a feel for the spatial relationships. Half an hour should be enough to finish it.
I rarely write blog posts, so why the exception today? Because I need something to prove that my intelligence is still there. By intelligence I mean an ability not closely tied to specific knowledge. They say success requires confidence, and confidence is not merely a belief; it should be built by putting one's abilities to the test.
At the end of every project we find a pile of bugs. How do we analyze them and avoid repeating the same mistakes?
One analysis method goes like this: based on the CommonCause that developers select when fixing bugs, pick the CommonCause with the largest share, then analyze the root cause from various angles and summarize what can be improved.
I find this method hard to accept, mainly because it always ends in generalities and does not really help reduce bugs.
(I am convinced this kind of analysis has little value, because it lacks any understanding of the underlying causes. Moreover, when a developer selects a CommonCause, he may well pick one arbitrarily because none fits. A frequently seen example is "lack of UT", which is not necessarily the real cause; often UT was done but simply did not catch the bug. This method is the classic armchair style that never digs into reality, relying on statistics while ignoring that the statistics themselves may be flawed. Such work cannot be effective.)
Here are some of my own thoughts. The analysis should start from the bugs themselves, not from the CommonCause. Starting directly from the CommonCause risks missing important and valuable discoveries. Some bugs could have been avoided, while for others there is no good countermeasure; we should concentrate on the avoidable bugs. As for deciding whether a specific bug was avoidable, the developer who introduced it should analyze it first, so that everyone learns how the bug came about, and then the team decides together. (The risk is whether people can accept this.) Next, based on when the bug was introduced and when testing finally caught it, summarize what could be improved. Bugs that can be eliminated by developer improvement are the ones most likely to be prevented from recurring. For example, some bugs are typos that blemish the text shown on screen; a feasible improvement is to always copy such text from the requirements document. Note that there must be some mechanism to ensure the lesson reaches all developers. Another example: once, yes, once I fixed a bug without cleaning it up completely. In the retrospective I mastered the global search-and-replace technique, which has effectively prevented similar mistakes from recurring.
Not all mistakes can be eliminated by developers; some can only be caught by testers. For example, testers are generally more sensitive to the UI than developers and better at finding UI bugs. I feel that as unit testing has advanced, the testing skill demanded of developers has risen too. This may not be entirely reasonable: a developer spends most of his energy on the implementation and cannot reach the same standard in testing.
Why do we need interfaces? The most important benefit comes from this fact: code that depends on an interface does not need to care about the implementation class, and if the implementation class is changed later, the client code does not need to be updated. This is the polymorphism feature of OOP languages such as Java.
In some projects the Struts framework was adopted, so all the fields that need to be persisted sit in the ActionForm. To avoid having the service layer / DAO layer depend on Struts, one way is to define an interface with getters and setters for all the fields that need to be persisted. The design looks like this:
XXXActionForm --------> XXXInterface <-------------- ServiceLayer/DAO Layer
where most of the methods are getters and setters. I can understand the concern, and it seems to follow the patterns in Patterns of Enterprise Application Architecture, but I cannot agree with this kind of design. I believe it is a misuse of interfaces.
First, with this design, if we add some fields we must update the ActionForm and then also update the interface. That is tedious, and in this case the interface cannot provide any abstraction, so it has to evolve whenever the implementation changes.
Second, there is only one implementation in the system, so the interface cannot deliver any benefit from polymorphism.
In a word, we get no design benefit from the interface in this case, and we carry the burden of keeping the implementation and the interface synchronized.
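A condensed sketch of the design being criticized (all names are invented for illustration):

```java
// Anti-pattern sketch: an "abstraction" that is just the ActionForm's
// getters and setters, with exactly one implementation. Adding a field
// means editing both the form and the interface, and nothing can vary
// behind the interface, so polymorphism buys us nothing.
interface OrderData {                          // hypothetical XXXInterface
    String getCustomerId();
    void setCustomerId(String id);
    // ...one getter/setter pair per persisted field...
}

class OrderActionForm implements OrderData {   // the only implementation
    private String customerId;
    public String getCustomerId() { return customerId; }
    public void setCustomerId(String id) { this.customerId = id; }
}

class OrderDao {
    // The DAO avoids a compile-time dependency on Struts, but it is still
    // coupled to the exact field list of the form through the interface.
    void save(OrderData data) {
        System.out.println("saving customer " + data.getCustomerId());
    }
}
```

The interface here mirrors its single implementation one-to-one, which is exactly why it provides neither stability nor abstraction.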