2012-02-21
Before Spring Cloud Config existed, I implemented a ZooKeeper-based configuration center myself and eliminated local properties files entirely. The principle is simple: just override PropertyPlaceholderConfigurer's mergeProperties():

/**
 * Override the merge-properties implementation:
 * load the file properties first, then merge in the properties
 * read from the ZK configuration center.
 *
 * @return the merged property set
 * @throws IOException on error
 */
@Override
protected Properties mergeProperties() throws IOException {
    Properties result = new Properties();
    // load the parent class's configuration
    Properties mergeProperties = super.mergeProperties();
    result.putAll(mergeProperties);
    // load the configuration read from ZK
    Map<String, String> configs = loadZkConfigs();
    result.putAll(configs);
    return result;
}

This implementation worked smoothly in plain Spring projects, but recently in some Spring Boot projects I found that this placeholder approach does not cooperate well with Spring Boot's @ConfigurationProperties(prefix = "xxx"): the properties never get resolved. Reading them with @Value does work, but configuring @Value everywhere gets tedious when there are many properties, so I still preferred @ConfigurationProperties with a prefix. The Spring Boot documentation lists the PropertySource order:

* Devtools global settings properties on your home directory (~/.spring-boot-devtools.properties when devtools is active).
* @TestPropertySource annotations on your tests.
* @SpringBootTest#properties annotation attribute on your tests.
* Command line arguments.
* Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property).
* ServletConfig init parameters.
* ServletContext init parameters.
* JNDI attributes from java:comp/env.
* Java System properties (System.getProperties()).
* OS environment variables.
* A RandomValuePropertySource that only has properties in random.*.
* Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants).
* Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants).
* Application properties outside of your packaged jar (application.properties and YAML variants).
* Application properties packaged inside your jar (application.properties and YAML variants).
* @PropertySource annotations on your @Configuration classes.
* Default properties (specified using SpringApplication.setDefaultProperties).

Notice that it checks the Java system properties, which means it is enough to write the properties collected by mergeProperties() into the Java system properties. Reading the source, I found an entry point:

/**
 * Override the property-processing implementation:
 * depending on a switch, decide whether to write the merged props
 * into the system properties (needed for Spring Boot).
 *
 * @param beanFactoryToProcess the bean factory being processed
 * @param props the merged properties
 * @throws BeansException on error
 */
@Override
protected void processProperties(ConfigurableListableBeanFactory beanFactoryToProcess, Properties props) throws BeansException {
    // original logic
    super.processProperties(beanFactoryToProcess, props);
    // write into system properties
    if (writePropsToSystem) {
        // write all properties to system for spring boot
        Enumeration<?> propertyNames = props.propertyNames();
        while (propertyNames.hasMoreElements()) {
            String propertyName = (String) propertyNames.nextElement();
            String propertyValue = props.getProperty(propertyName);
            System.setProperty(propertyName, propertyValue);
        }
    }
}

To limit the blast radius I added a switch controlling whether the properties are written to the system properties: Spring Boot projects turn it on, so production projects that are not on Spring Boot are affected as little as possible. With this in place, Spring Boot's @ConfigurationProperties reads the properties perfectly. For the details see org.springframework.boot.context.properties.ConfigurationPropertiesBindingPostProcessor:

@Override
public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
    ConfigurationProperties annotation = AnnotationUtils
            .findAnnotation(bean.getClass(), ConfigurationProperties.class);
    if (annotation != null) {
        postProcessBeforeInitialization(bean, beanName, annotation);
    }
    annotation = this.beans.findFactoryAnnotation(beanName, ConfigurationProperties.class);
    if (annotation != null) {
        postProcessBeforeInitialization(bean, beanName, annotation);
    }
    return bean;
}
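For completeness, a minimal sketch of the kind of prefix-bound bean this enables; the class name and the dfs.* property names are invented for the example:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// binds dfs.endpoint and dfs.timeout-ms from any PropertySource,
// including the Java system properties populated above
@Component
@ConfigurationProperties(prefix = "dfs")
public class DfsProperties {
    private String endpoint;
    private int timeoutMs;

    public String getEndpoint() { return endpoint; }
    public void setEndpoint(String endpoint) { this.endpoint = endpoint; }
    public int getTimeoutMs() { return timeoutMs; }
    public void setTimeoutMs(int timeoutMs) { this.timeoutMs = timeoutMs; }
}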
By default Spring does not inject into class-level variables, i.e. static fields. But in some scenarios, such as JUnit's @AfterClass, you need to access an injected object, and JUnit requires that method to be static, which creates a contradiction. There are two lines of attack:

Idea 1: find a way to inject into the static field, i.e. get around Spring's restriction that only non-static fields can receive dependencies.

Idea 2: find a way to make @AfterClass effectively non-static:
* Implement a JUnit RunListener and override testRunFinished(), which is non-static, to do what @AfterClass would do.
* Drop JUnit for TestNG, whose AfterClass is non-static.
* Use Spring's TestExecutionListeners: implement a Listener, which also offers a non-static @AfterClass-like hook to override.
All of the approaches under idea 2 are workable, but the unit tests already need their own Runner, and migrating to TestNG would be a huge effort, so that idea was abandoned. Continuing with idea 1, the only option is to work around Spring's static injection barrier. The code:

@Autowired
private Destination dfsOperationQueue;
private static Destination dfsOperationQueueStatic; // static version

@Autowired
private MessageQueueAPI messageQueueAPI;
private static MessageQueueAPI messageQueueAPIStatic; // static version
@PostConstruct
public void init() {
    dfsOperationQueueStatic = this.dfsOperationQueue;
    messageQueueAPIStatic = this.messageQueueAPI;
}
@AfterClass
public static void afterClass() {
    MessageVO messageVO = messageQueueAPIStatic.removeDestination(dfsOperationQueueStatic);
    System.out.println(messageVO);
}
In essence this is just a bait-and-switch via @PostConstruct: declare an extra static member pointing at the non-static object; the two are in fact the same instance.
I knew ActiveMQ now ships a REST API, but the official docs only mention it in passing (http://activemq.apache.org/rest.html),
and surprisingly Google turned up nothing useful either; for things like deleting a destination, there are plenty of questions and no answers. So I spent some time digging:
First, use the REST API to list everything the current version supports: http://172.30.43.206:8161/api/jolokia/list
Then, based on the JSON output describing the removeTopic and removeQueue MBean operations, implement destination removal over the REST API. Note that the request must be a GET, not a POST, or it fails (the wget example on the official site was the hint; my initial POST attempts kept erroring out).
import org.apache.activemq.command.ActiveMQQueue;
import org.apache.activemq.command.ActiveMQTopic;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.impl.client.DefaultHttpClient;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.http.client.ClientHttpRequestFactory;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

import javax.jms.Destination;
import javax.jms.JMSException;
import java.util.Arrays;

public class MessageQueueAdmin {
    private static final RestTemplate restTemplate = getRestTemplate("admin", "admin");

    private static String brokerHost = "172.30.43.206";
    private static String adminConsolePort = "8161";
    private static String protocol = "http";

    public static void removeDestination(Destination destination) throws JMSException {
        String destName, destType;
        if (destination instanceof ActiveMQQueue) {
            destName = ((ActiveMQQueue) destination).getQueueName();
            destType = "Queue";
        } else {
            destName = ((ActiveMQTopic) destination).getTopicName();
            destType = "Topic";
        }

        // build url
        String url = String.format("%s://%s:%s/api/jolokia/exec/org.apache.activemq:" +
                "brokerName=localhost,type=Broker/remove%s/%s",
                protocol, brokerHost, adminConsolePort, destType, destName);
        System.out.println(url);

        // do operation
        HttpHeaders headers = new HttpHeaders();
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
        HttpEntity<String> entity = new HttpEntity<String>("parameters", headers);
        ResponseEntity response = restTemplate.exchange(url, HttpMethod.GET, entity, String.class);
        System.out.println(response.getBody());
    }

    public static void main(String[] args) throws JMSException {
        ActiveMQTopic topic = new ActiveMQTopic("test-activemq-topic");
        removeDestination(topic);
    }

    private static RestTemplate getRestTemplate(String user, String password) {
        DefaultHttpClient httpClient = new DefaultHttpClient();
        BasicCredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials(user, password));
        httpClient.setCredentialsProvider(credentialsProvider);
        ClientHttpRequestFactory rf = new HttpComponentsClientHttpRequestFactory(httpClient);

        return new RestTemplate(rf);
    }
}
Other requests should all follow the format of jolokia's exec GET request:
https://jolokia.org/reference/html/protocol.html#exec
<base url>/exec/<mbean name>/<operation name>/<arg1>/<arg2>/...
When consuming messages from a queue with Spring JMS's JmsTemplate in CLIENT_ACKNOWLEDGE mode, I found the message is always acked automatically once it is returned, i.e. "Dequeued" by the broker:

protected Message doReceive(Session session, MessageConsumer consumer) throws JMSException {
    try {
        // Use transaction timeout (if available).
        long timeout = getReceiveTimeout();
        JmsResourceHolder resourceHolder =
                (JmsResourceHolder) TransactionSynchronizationManager.getResource(getConnectionFactory());
        if (resourceHolder != null && resourceHolder.hasTimeout()) {
            timeout = Math.min(timeout, resourceHolder.getTimeToLiveInMillis());
        }
        Message message = doReceive(consumer, timeout);
        if (session.getTransacted()) {
            // Commit necessary - but avoid commit call within a JTA transaction.
            if (isSessionLocallyTransacted(session)) {
                // Transacted session created by this template -> commit.
                JmsUtils.commitIfNecessary(session);
            }
        }
        else if (isClientAcknowledge(session)) {
            // Manually acknowledge message, if any.
            if (message != null) {
                message.acknowledge();
            }
        }
        return message;
    }
    finally {
        JmsUtils.closeMessageConsumer(consumer);
    }
}
With an asynchronous listener this does not happen. A Google search confirmed the issue is known:
https://jira.spring.io/browse/SPR-12995
https://jira.spring.io/browse/SPR-13255
http://louisling.iteye.com/blog/241073
For synchronous pulls I have not found a good wrapper yet, so for now I live with this, or use a listener wherever possible. Marking this issue for now; if anyone has a better solution, please comment.
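For comparison, a minimal listener-based setup might look like the sketch below; the broker URL and queue name are placeholders. With a listener container in CLIENT_ACKNOWLEDGE mode, the acknowledgement is tied to onMessage completing normally, so a thrown exception leads to redelivery instead of a silent ack:

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ClientAckListenerDemo {
    public static void main(String[] args) {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("test-queue"); // placeholder queue name
        container.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
        container.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                // throwing here prevents the ack and triggers redelivery
                System.out.println("got: " + message);
            }
        });
        container.afterPropertiesSet();
        container.start();
    }
}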
The default config sometimes fails to light up the monitor, and the resolution is very low. After repeated tuning with the tvservice tool, the following parameters turned out to be a perfect match. Edit these settings in /boot/config.txt:

disable_overscan=1
hdmi_force_hotplug=1
hdmi_group=1
hdmi_mode=16
hdmi_drive=2
config_hdmi_boost=4
dtparam=audio=on
http://stackoverflow.com/questions/3294423/spring-classpath-prefix-difference
SIMPLE DEFINITION
The classpath*:conf/appContext.xml simply means that all appContext.xml files under conf folders in all your jars on the classpath will be picked up and joined into one big application context.
In contrast, classpath:conf/appContext.xml will load only one such file, the first one found on your classpath.
<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:*.properties</value> <value>classpath*:*.properties</value> </list> </property> </bean>
- IDEA_JDK (or IDEA_JDK_64) environment variable
- jre/ (or jre64/) directory in IDEA home
- registry
- JDK_HOME environment variable
- JAVA_HOME environment variable
How do you modify console output history in Java? For the current line, "\r" comes to mind, but what about a previous line? A bit of googling shows it can be done with ANSI control sequences (ANSI codes). I wrote a small Java example that rewrites earlier output to another string and then restores the printing position; the sleeps are only there to make the effect of each control sequence easy to follow:

// print some test messages
System.out.println("1");
Thread.sleep(1000);
System.out.println("22");
Thread.sleep(1000);
System.out.println("333");
Thread.sleep(1000);
System.out.println("4444");
Thread.sleep(1000);

/**
 * modify "333" to "-"
 */
// Move up two lines
int count = 2;
System.out.print(String.format("\033[%dA", count));
Thread.sleep(1000);

// Erase current line content
System.out.print("\033[2K");
Thread.sleep(1000);

// update with new content
System.out.print("-");
Thread.sleep(1000);

// Move down two lines
System.out.print(String.format("\033[%dB", count));
Thread.sleep(1000);

// Move cursor one column left, back to the line start
System.out.print("\033[D");

// continue print others
Thread.sleep(1000);
System.out.println("55555");
Thread.sleep(1000);
1. An intuitive description of zookeeper basic/fast paxos: https://www.douban.com/note/208430424/
2. A detailed introduction: http://blog.csdn.net/xhh198781/article/details/10949697
server.compression.enabled=true
server.compression.mime-types=application/json,application/xml,text/html,text/xml,text/plain
server.compression.min-response-size=4096

The first parameter turns compression on; the second adds JSON responses to the compressed types (important for REST APIs); the third sets the minimum response size that triggers compression (the default is 2K; adjust to your situation). Reference: http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#how-to-enable-http-response-compression
How to install MongoDB 3.0+ on CentOS 7: https://docs.mongodb.org/manual/tutorial/install-mongodb-on-red-hat/
1. The three default classloaders: BootstrapClassloader (native implementation), ExtClassloader and AppClassloader (Java implementations).
2. The three loaders are not in a real parent-child inheritance relationship, only a logical one. At JVM startup the ExtClassloader instance is created first, and it is passed in as parent when the AppClassloader is constructed:

Launcher.ExtClassLoader extcl;
try {
    extcl = Launcher.ExtClassLoader.getExtClassLoader();
} catch (IOException var10) {
    throw new InternalError("Could not create extension class loader", var10);
}
try {
    this.loader = Launcher.AppClassLoader.getAppClassLoader(extcl);
} catch (IOException var9) {
    throw new InternalError("Could not create application class loader", var9);
}

On the parent-delegation principle: when loading a class, the loader checks whether a parent is set. If so, it calls parent.loadClass(); if not (parent == null), meaning the parent is effectively the BootstrapClassloader, it calls the native findBootstrapClass to load the class:

try {
    if (this.parent != null) {
        c = this.parent.loadClass(name, false);
    } else {
        c = this.findBootstrapClassOrNull(name);
    }
} catch (ClassNotFoundException var10) {
    ;
}
The goal is to load the system libs, the libs in the system ext directory, and the classpath libs in a fixed priority order, preventing the platform's default behavior or class implementations from being overridden.

3. Dynamic class loading in Java. Java's built-in ClassLoaders always check whether a Class has already been loaded before loading it, and a loaded Class is never loaded a second time. So to reload a Class we need to implement our own ClassLoader. Another issue is that every loaded Class must be linked, which is done through ClassLoader.resolveClass(); that method is final and cannot be overridden, and it does not allow one ClassLoader instance to link a Class twice. Therefore, whenever you need to reload a Class, you have to new up a fresh instance of your own ClassLoader.
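A minimal sketch of that reload pattern, assuming the class file lives in a directory we read ourselves (the directory path and class name in the usage comment are placeholders):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Loads classes from a directory; create a brand-new instance per reload.
class ReloadingClassLoader extends ClassLoader {
    private final Path classDir;

    ReloadingClassLoader(Path classDir, ClassLoader parent) {
        super(parent); // pass null to skip the application classpath cache
        this.classDir = classDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            Path classFile = classDir.resolve(name.replace('.', '/') + ".class");
            byte[] bytes = Files.readAllBytes(classFile);
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}

// usage: each reload needs a fresh loader instance
// ClassLoader cl = new ReloadingClassLoader(java.nio.file.Paths.get("/tmp/classes"), null);
// Class<?> c = cl.loadClass("com.example.Hot");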
maven-shade-plugin: builds an executable jar, bundling in all third-party dependencies.
exec-maven-plugin: runs external commands, e.g. compiling Python code within the project, combined with maven-assembly-plugin to produce the package.
maven-assembly-plugin: builds the project distribution; works with an XML descriptor that organizes the package layout, basically copying from the build environment into outputDirectory.
license-maven-plugin: generates a license summary of the third-party libraries used, among other uses.
maven-dependency-plugin: generates the dependency relationships between project libraries.
appassembler-maven-plugin: generates elegant startup scripts for the project, supporting Linux and Windows.
rpm-maven-plugin: builds an RPM installation package for the project.
maven-compiler-plugin: pins the project's JDK compile compatibility version and encoding.
Shortcut-key migration notes, continuously updated.
Found a nice article introducing the uses of the colon in shell: http://codingstandards.iteye.com/blog/1160298
The project starts its server with mvn exec:exec. I needed to debug the server's initialization, and mvnDebug was the obvious tool, but none of my breakpoints were hit, no matter how many times I retried. After more than an hour of fiddling, I saw someone on Stack Overflow point out that exec:exec runs the program in a separate process, so mvnDebug's debug options are appended to the parent Maven process only, and IDEA breakpoints are useless there. Once the cause was clear, the fix was easy: put the JDWP options into pom.xml, and then plain mvn exec:exec can be debugged normally:

<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>${mvnexec.version}</version>
            <executions>
                <execution>
                    <goals>
                        <goal>exec</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <includeProjectDependencies>true</includeProjectDependencies>
                <executable>java</executable>
                <workingDirectory>${basedir}/config/sim</workingDirectory>
                <classpathScope>runtime</classpathScope>
                <arguments>
                    <argument>-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=4000</argument>
                    <argument>-classpath</argument>
                    <classpath/>
                    <argument>com.ymiao.Main</argument>
                    <argument>server</argument>
                    <argument>${basedir}/config/sim/sim.yml</argument>
                </arguments>
            </configuration>
        </plugin>
    </plugins>
</build>
To sum up: exec:exec launches the program in a new process, while exec:java does the opposite, so mvnDebug plus exec:java should also work in theory.
After installing CentOS 7.1 on my T400, the wireless adapter "Intel 5100 AGN" could not be managed by NetworkManager and showed up in "PCI unknown" state. I googled many pages; most explained how to scan for wifi networks with the command line tool "iw". I tried every step they gave, but got stuck at the last one, obtaining a dynamic IP address with "sudo dhclient wlp3s0 -v": dhclient kept complaining "No DHCPOFFERS received." (I suspect there is some trick to dhclient that I am not familiar with.. sad..) It then occurred to me that there might be an extension for NetworkManager to manage wifi, so I googled "NetworkManager wifi" and found the NetworkManager-wifi plugin via https://www.centos.org/forums/viewtopic.php?f=47&t=52810. After the following steps, wifi finally works well on CentOS 7.1:

- yum install NetworkManager-wifi
- reboot the machine (I tried logout and login; that did not work)
The problem is that NetworkManager-wifi is not installed by default on CentOS 7.1 (could it have been my mistake when installing the OS? strange..)
http://onlywei.github.io/explain-git-with-d3
A project needs MBeans, so I gave them a quick spin and hit two issues:
- An MBean's plain methods show up under the MBeans tab in jconsole, but getters/setters do not.
- An MBean registered with an MBeanServer created in the following way does not show up in jconsole:
MBeanServer server = MBeanServerFactory.createMBeanServer();

whereas one registered with a server created in the following way is visible in jconsole:
MBeanServer server = ManagementFactory.getPlatformMBeanServer();
Others have hit the same problem on Stack Overflow: http://stackoverflow.com/questions/7424009/mbeans-registered-to-mbean-server-not-showing-up-in-jconsole
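For reference, a minimal registration against the platform MBeanServer, which jconsole does pick up; the interface and object name are made up for the example:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {
    // standard MBean naming: the interface must be named <Impl>MBean
    public interface HelloMBean {
        String sayHello();
    }

    public static class Hello implements HelloMBean {
        @Override
        public String sayHello() {
            return "hello";
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new Hello(), new ObjectName("com.example:type=Hello"));
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so jconsole can attach
    }
}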
http://www.ourd3js.com/wordpress/
Two compile issues I hit:

The first:

uuid_gen_unix.c: In function 'axutil_uuid_gen_v1':
uuid_gen_unix.c:62:20: error: variable 'tv' set but not used [-Werror=unused-but-set-variable]
  struct timeval tv;
                 ^
cc1: all warnings being treated as errors

The solution is to remove "-Werror" from all the configure scripts:

find -type f -name configure -exec sed -i '/CFLAGS/s/-Werror//g' {} \;

The second:

/usr/bin/ld: test.o: undefined reference to symbol 'axiom_xml_reader_free'
/usr/local/axis2c/lib/libaxis2_parser.so.0: error adding symbols: DSO missing from command line
collect2: error: ld returned 1 exit status
make[4]: *** [test] Error 1
make[4]: Leaving directory `/home/miaoyachun/softwares/test/axis2c-src-1.6.0/neethi/test'
As suggested in https://code.google.com/p/staff/issues/detail?id=198, the solution is to disable neethi/test in the following files. After that, "make; sudo make install" runs successfully. One last thing to watch out for: you may need to copy all the header files of neethi/include into /usr/local/axis2c/include/axis2-1.6.0/, which are needed when you compile a customized web service.
Enjoy it!!
The compile command for the hello.c client at http://axis.apache.org/axis2/c/core/docs/axis2c_manual.html#client_api always fails on my Ubuntu 12.04:

gcc -o hello -I$AXIS2C_HOME/include/axis2-1.6.0/ -L$AXIS2C_HOME/lib -laxutil -laxis2_axiom -laxis2_parser -laxis2_engine -lpthread -laxis2_http_sender -laxis2_http_receiver -ldl -Wl,--rpath -Wl,$AXIS2C_HOME/lib hello.c
/tmp/ccCYikFh.o: In function `main':
hello.c:(.text+0x57): undefined reference to `axutil_env_create_all'
hello.c:(.text+0x68): undefined reference to `axis2_options_create'
hello.c:(.text+0x93): undefined reference to `axutil_strcmp'
hello.c:(.text+0xeb): undefined reference to `axis2_endpoint_ref_create'
hello.c:(.text+0x102): undefined reference to `axis2_options_set_to'
hello.c:(.text+0x13d): undefined reference to `axis2_svc_client_create'
hello.c:(.text+0x168): undefined reference to `axutil_error_get_message'
hello.c:(.text+0x193): undefined reference to `axutil_log_impl_log_error'
hello.c:(.text+0x1b1): undefined reference to `axis2_svc_client_set_options'
hello.c:(.text+0x1d6): undefined reference to `axis2_svc_client_send_receive'
hello.c:(.text+0x21d): undefined reference to `axiom_node_free_tree'
hello.c:(.text+0x238): undefined reference to `axutil_error_get_message'
hello.c:(.text+0x266): undefined reference to `axutil_log_impl_log_error'
hello.c:(.text+0x28d): undefined reference to `axis2_svc_client_free'
hello.c:(.text+0x2a8): undefined reference to `axutil_env_free'
/tmp/ccCYikFh.o: In function `build_om_request':
hello.c:(.text+0x2ed): undefined reference to `axiom_element_create'
hello.c:(.text+0x307): undefined reference to `axiom_element_set_text'
/tmp/ccCYikFh.o: In function `process_om_response':
hello.c:(.text+0x337): undefined reference to `axiom_node_get_first_child'
hello.c:(.text+0x351): undefined reference to `axiom_node_get_node_type'
hello.c:(.text+0x367): undefined reference to `axiom_node_get_data_element'
hello.c:(.text+0x381): undefined reference to `axiom_text_get_value'
hello.c:(.text+0x396): undefined reference to `axiom_text_get_value'
collect2: error: ld returned 1 exit status
I checked the gcc command carefully; the header and library paths were all correct. Only after discussing with a colleague did we spot the position of hello.c: the failure appears when hello.c is placed to the right of the libraries it depends on, because the linker resolves symbols left to right, so source/object files must come before the libraries they reference. But the official example was presumably tested; how could it have this problem? Maybe the gcc on my Ubuntu 12.04 is just stricter. The corrected gcc command:

gcc -o hello hello.c -I$AXIS2C_HOME/include/axis2-1.6.0/ -L$AXIS2C_HOME/lib -laxutil -laxis2_axiom -laxis2_parser -laxis2_engine -lpthread -laxis2_http_sender -laxis2_http_receiver -ldl -Wl,--rpath -Wl,$AXIS2C_HOME/lib
On Ubuntu 12.04, after every edit of limits.conf, making all subsequent sessions see the change normally requires either rebooting or re-logging in. Here is a way to make it take effect immediately without leaving the terminal:

1. Edit /etc/pam.d/sudo and append the following line to the end of the file:

session required pam_limits.so
2. Edit /etc/security/limits.conf, e.g.:

root soft nofile 65535
root hard nofile 65535
3. Run sudo -i -u root to simulate a login initialization.

Note that on CentOS 6, /etc/pam.d/sudo already enables pam_limits.so by default, so steps 2 and 3 suffice. Of course, re-logging in over ssh may be even quicker, since /etc/pam.d/sshd enables pam_limits.so by default; it only costs typing the password once more.
ss (shadowsocks) is socks5-based, but the Android Studio SDK manager only supports HTTP proxies, so Android Studio cannot update the SDK tools through it. The solution is polipo, which can convert a socks5 proxy into an HTTP proxy. See https://github.com/shadowsocks/shadowsocks/wiki/Convert-Shadowsocks-into-an-HTTP-proxy
On Ubuntu, ibus often stops accepting Chinese input; the following command works around it temporarily:
ibus-daemon -r &
From the release notes of the very first JDK 7 release (http://www.oracle.com/technetwork/java/javase/jdk7-relnotes-418459.html):
Area: HotSpot
Synopsis: In JDK 7, interned strings are no longer allocated in the permanent generation of the Java heap, but are instead allocated in the main part of the Java heap (known as the young and old generations), along with the other objects created by the application. This change will result in more data residing in the main Java heap, and less data in the permanent generation, and thus may require heap sizes to be adjusted. Most applications will see only relatively small differences in heap usage due to this change, but larger applications that load many classes or make heavy use of the String.intern() method will see more significant differences.
RFE: 6962931
Today a colleague asked why starting Jenkins failed on Ubuntu. I remembered it worked fine when I played with it before, so I traced it; the final error message was:

daemon: fatal: refusing to execute unsafe program: /usr/bin/java (/opt is group and world writable)

The root cause: the machine has several JDK versions installed, and the permissions on /opt, the JDK's parent directory, were too loose. Tightening them to the 755 that daemon demands solved it:

chmod -R 755 /opt

This scenario is common enough; plenty of people must run into it.
latency = client send request time + network transfer time (->) + server receive request time + response time + server send response time + network transfer time (<-) + client receive response time

In other words: latency = time from first byte out to last byte in.
I used to manage system services with chkconfig on CentOS; Ubuntu does not ship that tool, but Google points to a replacement, sysv-rc-conf. Just apt-get install it and you can use it right away; it even has a text console.
Memory-leak analysis of Java programs can also be done with valgrind, which is especially useful for JNI code:

valgrind --error-limit=no --trace-children=yes --smc-check=all --leak-check=full JAVA_CMD

I wrote a deliberately leaky JNI function, and valgrind caught it:

==31915== 100 bytes in 1 blocks are definitely lost in loss record 447 of 653
==31915==    at 0x402CE68: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==31915==    by 0x60424F9: Java_MyJNI_hello (MyJNI.c:16)
With old valgrind versions (3.5.0), enabling the --trace-children option may produce the error:

Error occurred during initialization of VM
Unknown x64 processor: SSE2 not supported
Upgrading to the latest version solves it. To upgrade: download the source package, unpack it, and run ./configure; make; make install
A Maven project had many local third-party dependencies. Adding them one by one as dependency + system scope is tedious, and I found no working wildcard trick either. Stack Overflow pointed to a plugin, addjars-maven-plugin, which handles this need nicely:

<build>
    <plugins>
        <plugin>
            <groupId>com.googlecode.addjars-maven-plugin</groupId>
            <artifactId>addjars-maven-plugin</artifactId>
            <version>1.0.2</version>
            <executions>
                <execution>
                    <goals>
                        <goal>add-jars</goal>
                    </goals>
                    <configuration>
                        <resources>
                            <resource>
                                <directory>${basedir}/../lib</directory>
                            </resource>
                        </resources>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-assembly-plugin</artifactId>
            <version>${maven.assembly.version}</version>
            <configuration>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <appendAssemblyId>false</appendAssemblyId>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>single</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Put all the third-party jars the project depends on into the lib directory and they will all be packed into the release jar.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>
#include <inttypes.h>

uint64_t htonll(uint64_t val) {
    return (((uint64_t) htonl(val)) << 32) + htonl(val >> 32);
}

uint64_t ntohll(uint64_t val) {
    return (((uint64_t) ntohl(val)) << 32) + ntohl(val >> 32);
}

int main() {
    uint64_t hll = 0x1122334455667788;
    printf("uint64: %"PRIu64"\n", hll);
    printf("0x%"PRIX64"\n", hll);
    printf("htonll(hll) = 0x%"PRIX64"\n", htonll(hll));
    printf("ntohll(htonll(hll)) = 0x%"PRIX64"\n", ntohll(htonll(hll)));
    printf("ntohll(hll) = 0x%"PRIX64"\n", ntohll(hll)); // no change
    return 1;
}
Big endian is the network byte order; little endian is the host byte order on Intel architectures.
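For comparison, the same conversion in Java can lean on ByteBuffer, whose default byte order is already big-endian (network order); a small sketch:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Endian {
    // serialize a long in network byte order (big-endian)
    static byte[] toNetwork(long v) {
        return ByteBuffer.allocate(8).order(ByteOrder.BIG_ENDIAN).putLong(v).array();
    }

    // read a long back from network byte order
    static long fromNetwork(byte[] b) {
        return ByteBuffer.wrap(b).order(ByteOrder.BIG_ENDIAN).getLong();
    }

    public static void main(String[] args) {
        long hll = 0x1122334455667788L;
        System.out.printf("0x%X%n", fromNetwork(toNetwork(hll))); // 0x1122334455667788
    }
}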
The output of the jd-eclipse plugin for decompiling Java class files is quite nice. Reading is convenient, but debugging suffers because the line numbers do not match. Googling shows plenty of people hit the same problem. I finally found a solution: http://sourceforge.net/projects/realignmentjd/files/
-----------------
1. Download JD-Eclipse and JD-GUI - http://java.decompiler.free.fr/ and install.
2. Put the file realignment.jd.ide.eclipse_1.0.2.jar in the eclipse/plugins directory.

To use the realignment feature, open the menu Preferences/General/Editors/File Associations, select the "*.class" file type, and choose "Realignment for JD Class File Editor" as the associated editor.

Another possibility is batch realignment after processing with JD-GUI. To work properly, the "Display line numbers" property must be switched on in Help/Preferences of JD-GUI. To use this feature, open the menu Preferences/Java/Decompiler/Batch Realignment and click the "Open dialog" button.

Existing limitation: the realignment is performed only for methods. For it to work properly, the "Display line numbers" property in the menu "Preferences/Java/Decompiler" must be active.
The JD-Eclipse plugin plus the realignment patch makes elegant debugging of class files possible. If you only want to read the decompiled code, skip the realignment patch: it hurts readability (it introduces lots of blank lines).
sudo dpkg -l \*erlang\*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                 Version                 Description
+++-====================-=======================-==========================================================================
ii  erlang               1:14.b.4-dfsg-1ubuntu1  Concurrent, real-time, distributed functional language
un  erlang-abi-13.a      <none>                  (no description available)
ii  erlang-appmon        1:14.b.4-dfsg-1ubuntu1  Erlang/OTP application monitor
ii  erlang-asn1          1:14.b.4-dfsg-1ubuntu1  Erlang/OTP modules for ASN.1 support
rc  erlang-base          1:14.b.4-dfsg-1ubuntu1  Erlang/OTP virtual machine and base applications
ii  erlang-base-hipe     1:14.b.4-dfsg-1ubuntu1  Erlang/OTP HiPE enabled virtual machine and base applications
ii  erlang-common-test   1:14.b.4-dfsg-1ubuntu1  Erlang/OTP application for automated testing
ii  erlang-corba         1:14.b.4-dfsg-1ubuntu1  Erlang/OTP applications for CORBA support
ii  erlang-crypto        1:14.b.4-dfsg-1ubuntu1  Erlang/OTP cryptographic modules
ii  erlang-debugger      1:14.b.4-dfsg-1ubuntu1  Erlang/OTP application for debugging and testing
ii  erlang-dev           1:14.b.4-dfsg-1ubuntu1  Erlang/OTP development libraries and headers
ii  erlang-dialyzer      1:14.b.4-dfsg-1ubuntu1  Erlang/OTP discrepancy analyzer application
ii  erlang-diameter      1:14.b.4-dfsg-1ubuntu1  Erlang/OTP implementation of RFC 3588 protocol
ii  erlang-doc           1:14.b.4-dfsg-1ubuntu1  Erlang/OTP HTML/PDF documentation
un  erlang-doc-html      <none>                  (no description available)
ii  erlang-docbuilder    1:14.b.4-dfsg-1ubuntu1  Erlang/OTP application for building HTML documentation
ii  erlang-edoc          1:14.b.4-dfsg-1ubuntu1  Erlang/OTP module for generating documentation
ii  erlang-erl-docgen    1:14.b.4-dfsg-1ubuntu1  Erlang/OTP documentation stylesheets
ii  erlang-et            1:14.b.4-dfsg-1ubuntu1  Erlang/OTP event tracer application
ii  erlang-eunit         1:14.b.4-dfsg-1ubuntu1  Erlang/OTP module for unit testing
ii  erlang-examples      1:14.b.4-dfsg-1ubuntu1  Erlang/OTP application examples
ii  erlang-gs            1:14.b.4-dfsg-1ubuntu1  Erlang/OTP graphics system
ii  erlang-ic            1:14.b.4-dfsg-1ubuntu1  Erlang/OTP IDL compiler
ii  erlang-ic-java       1:14.b.4-dfsg-1ubuntu1  Erlang/OTP IDL compiler (Java classes)
ii  erlang-inets         1:14.b.4-dfsg-1ubuntu1  Erlang/OTP Internet clients and servers
ii  erlang-inviso        1:14.b.4-dfsg-1ubuntu1  Erlang/OTP trace tool
ii  erlang-jinterface    1:14.b.4-dfsg-1ubuntu1  Java communication tool to Erlang
ii  erlang-manpages      1:14.b.4-dfsg-1ubuntu1  Erlang/OTP manual pages
ii  erlang-megaco        1:14.b.4-dfsg-1ubuntu1  Erlang/OTP implementation of Megaco/H.248 protocol
ii  erlang-mnesia        1:14.b.4-dfsg-1ubuntu1  Erlang/OTP distributed relational/object hybrid database
ii  erlang-mode          1:14.b.4-dfsg-1ubuntu1  Erlang major editing mode for Emacs
ii  erlang-nox           1:14.b.4-dfsg-1ubuntu1  Erlang/OTP applications that don't require X Window System
ii  erlang-observer      1:14.b.4-dfsg-1ubuntu1  Erlang/OTP application for investigating distributed systems
ii  erlang-odbc          1:14.b.4-dfsg-1ubuntu1  Erlang/OTP interface to SQL databases
ii  erlang-os-mon        1:14.b.4-dfsg-1ubuntu1  Erlang/OTP operating system monitor
ii  erlang-parsetools    1:14.b.4-dfsg-1ubuntu1  Erlang/OTP parsing tools
ii  erlang-percept       1:14.b.4-dfsg-1ubuntu1  Erlang/OTP concurrency profiling tool
ii  erlang-pman          1:14.b.4-dfsg-1ubuntu1  Erlang/OTP process manager
ii  erlang-public-key    1:14.b.4-dfsg-1ubuntu1  Erlang/OTP public key infrastructure
ii  erlang-reltool       1:14.b.4-dfsg-1ubuntu1  Erlang/OTP release management tool
ii  erlang-runtime-tools 1:14.b.4-dfsg-1ubuntu1  Erlang/OTP runtime tracing/debugging tools
ii  erlang-snmp          1:14.b.4-dfsg-1ubuntu1  Erlang/OTP SNMP applications
ii  erlang-src           1:14.b.4-dfsg-1ubuntu1  Erlang/OTP applications sources
ii  erlang-ssh           1:14.b.4-dfsg-1ubuntu1  Erlang/OTP implementation of SSH protocol
ii  erlang-ssl           1:14.b.4-dfsg-1ubuntu1  Erlang/OTP implementation of SSL
ii  erlang-syntax-tools  1:14.b.4-dfsg-1ubuntu1  Erlang/OTP modules for handling abstract Erlang syntax trees
ii  erlang-test-server   1:14.b.4-dfsg-1ubuntu1  Erlang/OTP server for automated application testing
ii  erlang-toolbar       1:14.b.4-dfsg-1ubuntu1  Erlang/OTP graphical toolbar
ii  erlang-tools         1:14.b.4-dfsg-1ubuntu1  Erlang/OTP various tools
ii  erlang-tv            1:14.b.4-dfsg-1ubuntu1  Erlang/OTP table viewer
ii  erlang-typer         1:14.b.4-dfsg-1ubuntu1  Erlang/OTP code type annotator
ii  erlang-webtool       1:14.b.4-dfsg-1ubuntu1  Erlang/OTP helper for web-based tools
ii  erlang-x11           1:14.b.4-dfsg-1ubuntu1  Erlang/OTP applications that require X Window System
ii  erlang-xmerl         1:14.b.4-dfsg-1ubuntu1  Erlang/OTP XML tools
erlang-dev contains the header files, erlang-src the source code, erlang-debugger the debugging tools, and erlang-base the virtual machine. You can also consult the Erlang man docs according to the package name suffix, e.g.:

man 3erl erlang
man 3erl mnesia
man 3erl io
Debugging an Erlang program on the command line failed:

2> c(hello, [debug_info]).
{ok,hello}
3> im().
Call to i:im/0 in application debugger failed.
ok
Googling revealed that erlang-debugger was simply not installed:

sudo apt-get install erlang-debugger

After that, the Monitor window came up. Debugging with Eclipse's erlide plugin also works.
Running an external command from mvn, command-line form:

mvn exec:exec -Dexec.executable=sh -Dexec.workingdir=./bin -Dexec.args=hello.sh

Configuration-file form:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>test-exec</id>
            <phase>initialize</phase>
            <configuration>
                <executable>sh</executable>
                <workingDirectory>./bin</workingDirectory>
                <arguments>
                    <argument>hello.sh</argument>
                </arguments>
            </configuration>
            <goals>
                <goal>exec</goal>
            </goals>
        </execution>
    </executions>
</plugin>
Generating a Java project skeleton with mvn:

mvn archetype:generate -DgroupId=com.abc.product -DartifactId=product -DpackageName=com.abc.product -DarchetypeArtifactId=maven-archetype-quickstart

Converting it into a project Eclipse recognizes, then importing into Eclipse and coding:

mvn eclipse:eclipse

Unit testing with mvn:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.12.4</version>
    <configuration>
        <forkMode>pertest</forkMode>
        <excludes>
            <exclude>**/perftest/*.java</exclude>
        </excludes>
        <systemProperties>
            <property>
                <name>log4j.configuration</name>
                <value>target/test-classes/log4j.properties</value>
            </property>
        </systemProperties>
    </configuration>
</plugin>
Code-coverage statistics with mvn:

<reporting>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>emma-maven-plugin</artifactId>
            <version>1.0-alpha-3</version>
            <inherited>true</inherited>
        </plugin>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>surefire-report-maven-plugin</artifactId>
            <inherited>true</inherited>
        </plugin>
    </plugins>
</reporting>
Generating javadoc with mvn:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-javadoc-plugin</artifactId>
    <version>2.9</version>
    <configuration>
        <show>private</show>
    </configuration>
    <executions>
        <execution>
            <id>attach-javadocs</id>
            <goals>
                <goal>javadoc</goal>
                <goal>test-javadoc</goal>
            </goals>
            <phase>site</phase>
        </execution>
    </executions>
</plugin>
A recent project uses JNI, which involves the java.library.path parameter. At first I assumed that any .so ldconfig can find would also be found by Java; sadly that is not the case. For a Java program to find a shared library, java.library.path still has to be specified when launching it. With Eclipse it can be set like this:

Properties -> Run/Debug settings -> Arguments -> VM arguments
-----------------------------------------
-Djava.library.path=/home/miaoyachun/workspace/JNIC/Release

That is the traditional way. Google also turns up a tricky way of letting the program change java.library.path at runtime:

private static void loadJNILibDynamically() {
    try {
        System.setProperty("java.library.path", System.getProperty("java.library.path")
                + ":/home/miaoyachun/workspace/JNIC/Release/");
        Field fieldSysPath = ClassLoader.class.getDeclaredField("sys_paths");
        fieldSysPath.setAccessible(true);
        fieldSysPath.set(null, null);

        System.loadLibrary("JNIC");
    } catch (Exception e) {
        // do nothing for exception
    }
}

In fact, Linux also has the environment variable LD_LIBRARY_PATH: if the lib can be found there, java.library.path needs no configuration at all, and dependencies between libs are handled too. java.library.path is much weaker in that respect, e.g. when a lib depends on other libs in different directories.
A colleague reported a networking phenomenon today: on a multi-NIC machine, packets destined for eth1 were all received on eth0. My first impression was an ARP problem, which Google confirmed; there is a related kernel parameter:

arp_ignore - INTEGER
    Define different modes for sending replies in response to received ARP requests that resolve local target IP addresses:
    0 - (default): reply for any local target IP address, configured on any interface
    1 - reply only if the target IP address is local address configured on the incoming interface
    2 - reply only if the target IP address is local address configured on the incoming interface and both with the sender's IP address are part from same subnet on this interface
    3 - do not reply for local addresses configured with scope host, only resolutions for global and link addresses are replied
    4-7 - reserved
    8 - do not reply for all local addresses

The default is 0; fixing this problem requires setting it to 1.

Temporary setting:

sysctl -w net.ipv4.conf.all.arp_ignore=1

Persistent setting:

sysctl -w net.ipv4.conf.all.arp_ignore=1
echo 'net.ipv4.conf.all.arp_ignore=1' >> /etc/sysctl.conf

Once done, restart the network service so other machines refresh their ARP caches, or, if restarting the network is inconvenient, send gratuitous ARP manually with arping, e.g.:

arping -q -A -c 1 -I eth1 10.197.24.177

(That command shows up in /etc/sysconfig/network-scripts/ifup-eth.) If there are only a few machines, you can also delete the relevant caches directly with arp -d, but the broadcast approach above is recommended.
Checking disk details:
smartctl -a /dev/sda
(smartctl comes from smartmontools; install with apt-get install smartmontools)

Listing all RAID devices:
mdadm -Ds

Details of a specific RAID device:
mdadm -D /dev/md0

Creating a RAID device:
mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=8 /dev/sdd /dev/sdc /dev/sdf /dev/sde /dev/sdg /dev/sdh /dev/sdi /dev/sdj
Stopping a RAID device:
mdadm -S /dev/md0

Formatting a RAID device:
mkfs -t xfs -f /dev/md0

Mounting a RAID device:
mount -t xfs /dev/md0 /raid

Steps to switch RAID mode:
1. umount if mounted: umount /raid
2. stop the raid device: mdadm -S /dev/md0
3. create the raid: mdadm --create ...
4. update /etc/mdadm.conf with the output of 'mdadm -Ds', so the array is assembled automatically at boot
5. update /etc/fstab, if it should be mounted automatically at boot

Ref:
http://francs3.blog.163.com/blog/static/40576727201212145744783/
http://hi.baidu.com/powersaven/item/1da2dc147a8be2e25f53b19e
Usage of alternatives:

alternatives --install /usr/bin/java java /opt/jdk1.5.0_22/bin/java 15000
alternatives --install /usr/bin/javac javac /opt/jdk1.5.0_22/bin/javac 15000
alternatives --config java
alternatives --config javac
After recently switching my desktop environment to Ubuntu, I found the tool goes by a different name there: update-alternatives. Usage is the same..
Straight to the C implementation:

typedef struct Foo {
    int len;
    char name[100];
} Foo_t;

JNIEXPORT jint JNICALL Java_TestJNI_foo(JNIEnv *env, jobject obj, jobject fooObj) {
    Foo_t *bar = malloc(sizeof(Foo_t));
    jclass clazz;
    jfieldID fid;

    // init the bar data of C
    strcpy(bar->name, "Yachun Miao");
    bar->len = strlen(bar->name);

    // mapping bar of C to foo
    clazz = (*env)->GetObjectClass(env, fooObj);
    if (0 == clazz) {
        printf("GetObjectClass returned 0\n");
        return (-1);
    }
    fid = (*env)->GetFieldID(env, clazz, "len", "I");
    (*env)->SetIntField(env, fooObj, fid, bar->len);

    fid = (*env)->GetFieldID(env, clazz, "name", "Ljava/lang/String;");
    jstring name = (*env)->NewStringUTF(env, bar->name);
    (*env)->SetObjectField(env, fooObj, fid, name);

    free(bar);
    return 0;
}

The corresponding Java caller:

public class Foo {
    protected int len;
    protected String name;
}

private static native int foo(Foo fooObj);

public static void main(String args[]) {
    System.loadLibrary("mylib");

    Foo foo = new Foo();
    foo(foo);
    System.out.println(foo.name);
    System.out.println(foo.len);
}

References:
http://www.steveolyo.com/JNI/JNI.html#CSTRCJ
http://docs.oracle.com/javase/6/docs/technotes/guides/jni/spec/types.html
My day-to-day work environment is the unpleasant Citrix XenApp & Citrix Receiver. After installing them I found the client side could not copy content to the server side, which is a real drag for work. I happened to notice the citrix receiver process has a -file option pointing to a temporary config file, which contains:

ClipboardAllowed=off

Grepping for that keyword, I found a similar entry in ~/ICAClient/linuxx86/config/All_Regions.ini:

ClipboardAllowed=*

After changing it to ClipboardAllowed=true and opening a new XenApp session, pasting works. Following the same idea, colleagues on Windows connected the two clipboards like this:
1. Open the system registry editor.
2. Navigate to HKEY_CURRENT_USER\Software\Citrix\ICA Client\Engine\Lockdown Profiles\All Regions\Lockdown\Virtual Channels\Clipboard
3. Set ClipboardAllowed to 1.
4. Log out the current user (possibly required).

If you use an X client such as vnc viewer on the XenApp server and want the clipboard to reach the vnc server too, you additionally need the following process running on the Linux host where the vnc server lives:

vncconfig -nowin &

I do not quite understand this one, but it does work. Awaiting the real explanation..
miaoyachun@ymiao:~$ /usr/lib/i386-linux-gnu/ibus/ibus-x11 --kill-daemon ^Z [1]+ Stopped /usr/lib/i386-linux-gnu/ibus/ibus-x11 --kill-daemon miaoyachun@ymiao:~$ bg [1]+ /usr/lib/i386-linux-gnu/ibus/ibus-x11 --kill-daemon & miaoyachun@ymiao:~$
After that it works again..
Today I found Ubuntu 12.04 had no default browser configured, so every link opened with gedit. Google had the fix: write the following content:

[Desktop Entry]
Version=14.0
Name=Mozilla Firefox Web Browser
Comment=Browse the World Wide Web
GenericName=Web Browser
Keywords=Internet;WWW;Browser;Web;Explorer
Exec=/opt/firefox/firefox %u
Terminal=false
X-MultipleArgs=false
Type=Application
Icon=firefox
Categories=GNOME;GTK;Network;WebBrowser;
MimeType=text/html;text/xml;application/xhtml+xml;application/xml;application/rss+xml;application/rdf+xml;image/gif;image/jpeg;image/png;x-scheme-handler/http;x-scheme-handler/https;x-scheme-handler/ftp;x-scheme-handler/chrome;video/webm;application/x-xpinstall;
StartupNotify=true
Actions=NewWindow;
into the file ~/.local/share/applications/firefox.desktop, save and exit, then run:

update-desktop-database ~/.local/share/applications/
Once configured, firefox appears in the "System Settings -> Details -> Default Applications -> Web" list.

Ref: http://askubuntu.com/questions/166455/how-do-i-make-luakit-my-default-browser
When playing with eclim (eclipse-style vim), I found its "LocateFile" command does not work well when "invpaste" is enabled in vim. The solution is to disable it by commenting the setting out:

" set invpaste
Downloading files from a Jetty server with curl was slow, only about 4M/s. At first I suspected curl had a default limit-rate, but setting it to 1G it was still slow. Then I started suspecting the Jetty server. SslSelectChannelConnector's responseBufferSize looked likely, but repeated experiments showed it was actually the header buffer size being too small. After raising it to 32K:

SslSelectChannelConnector connector = new SslSelectChannelConnector();
connector.setRequestBufferSize(32768);

The effect:

curl -k https://USER:PASSWD@HOST:PORT/api/internal/file?filename=/path/to/file > /dest/to/file
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  723M  100  723M    0     0  29.3M      0  0:00:24  0:00:24 --:--:-- 29.4M
ref: http://wiki.eclipse.org/Jetty/Howto/Configure_Connectors
PHP's curl option CURLOPT_SSL_VERIFYPEER=false is the same thing as the '-k or --insecure' option of the curl command. Ref: http://curl.haxx.se/docs/sslcerts.html
Assume gprof, gprof2dot.py and graphviz are installed.
1. Check out the memcached source code from the git server.
2. sh autogen.sh & ./configure
3. Modify the Makefile's CFLAGS to append the option '-pg', then make.
4. Run memcached and drive some actions via telnet.
5. Terminate the memcached process; a gmon.out file will be generated.
6. gprof memcached gmon.out | /usr/bin/gprof2dot.py -n0 -e0 -c bw | dot -Tpng -o memcached_callgraph.png
The scriptlets also take an argument, passed into them by the controlling rpmbuild process. This argument, accessed via $1 is the number of packages of this name which will be left on the system when the action completes, except for %pretrans and %posttrans which are always run with $1 as 0 (%pretrans and %posttrans are available in rpm 4.4 and later). So for the common case of install, upgrade, and uninstall we have:
            | install | upgrade | uninstall |
%pretrans   | $1 == 0 | $1 == 0 | (N/A)     |
%pre        | $1 == 1 | $1 == 2 | (N/A)     |
%post       | $1 == 1 | $1 == 2 | (N/A)     |
%preun      | (N/A)   | $1 == 1 | $1 == 0   |
%postun     | (N/A)   | $1 == 1 | $1 == 0   |
%posttrans  | $1 == 0 | $1 == 0 | (N/A)     |
Scriptlets ordering The scriptlets in %pre and %post are respectively run before and after a package is installed. The scriptlets %preun and %postun are run before and after a package is uninstalled. The scriptlets %pretrans and %posttrans are run at start and end of a transaction.
On upgrade, the scripts are run in the following order:
%pretrans of new package
%pre of new package
(package install)
%post of new package
%triggerin of other packages (set off by installing new package)
%triggerin of new package (if any are true)
%triggerun of old package (if it's set off by uninstalling the old package)
%triggerun of other packages (set off by uninstalling old package)
%preun of old package
(removal of old package)
%postun of old package
%triggerpostun of old package (if it's set off by uninstalling the old package)
%triggerpostun of other packages (if they're set off by uninstalling the old package)
%posttrans of new package
For detail, will ref: http://fedoraproject.org/wiki/Packaging:ScriptletSnippets
Steps:
1. Download the maven project source code.
2. cd to the project root dir and run "mvn eclipse:eclipse".
3. Import it as an eclipse Java project.
Step 2 generates the .classpath & .project files.
1. Configuration

compress: compress the rotated logs with gzip
nocompress: use when no compression is wanted
copytruncate: for log files still held open; back up the current log and truncate it in place
nocopytruncate: back up the log file but do not truncate it
create mode owner group: create the new log file on rotation with the specified file mode
nocreate: do not create a new log file
delaycompress: together with compress, delay compressing a rotated log until the next rotation
nodelaycompress: override the delaycompress option; compress at rotation time
errors address: send error messages during rotation to the specified email address
ifempty: rotate even empty files; this is logrotate's default
notifempty: do not rotate empty files
mail address: mail rotated log files to the specified email address
nomail: do not mail rotated log files
olddir directory: move rotated logs into the specified directory, which must be on the same filesystem as the current log file
noolddir: keep rotated logs in the same directory as the current log file
prerotate/endscript: commands to run before rotation go between this pair; both keywords must be on their own lines
postrotate/endscript: commands to run after rotation go between this pair; both keywords must be on their own lines
daily: rotate daily
weekly: rotate weekly
monthly: rotate monthly
rotate count: number of rotated copies kept before deletion; 0 keeps no backups, 5 keeps five
tabootext [+] list: make logrotate skip files with the given extensions; the defaults are .rpm-orig, .rpmsave, v, and ~
size size: rotate only when the log file reaches the given size; size accepts bytes (default), KB (sizek) or MB (sizem)
2. Command line options

OPTIONS
-v     Turn on verbose mode.
-d Turns on debug mode and implies -v. In debug mode, no changes will be made to the logs or to the logrotate state file.
-f, --force
       Tells logrotate to force the rotation, even if it doesn't think this is necessary. Sometimes this is useful after adding new entries to logrotate, or if old log files have been removed by hand, as the new files will be created, and logging will continue correctly.
-m, --mail <command>
       Tells logrotate which command to use when mailing logs. This command should accept two arguments: 1) the subject of the message, and 2) the recipient. The command must then read a message on standard input and mail it to the recipient. The default mail command is /bin/mail -s.
-s, --state <statefile> Tells logrotate to use an alternate state file. This is useful if logrotate is being run as a different user for various sets of log files. The default state file is /var/lib/logrotate.status.
--usage Prints a short usage message.
The -d option turns on debug mode; -v turns on verbose mode; -f forces a rotation even when the conditions are not met. The difference between debug and verbose is that debug mode is a dry-run version of verbose mode, typically used to test a newly added logrotate config, e.g.:

/usr/sbin/logrotate /etc/logrotate.d/NEWCONFIG -df
/usr/sbin/logrotate /etc/logrotate.d/NEWCONFIG -vf
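Tying the directives together, a plausible entry for a hypothetical /var/log/myapp.log (the path and the particular choices are invented for illustration):

/var/log/myapp.log {
    daily
    rotate 5
    compress
    delaycompress
    notifempty
    copytruncate
}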
Perl's predefined variables (perlvar):

$-   lines left on the current page, part of Perl's format system
$!   the error number or error string, depending on context
$"   the list separator
$#   the default output format for printed numbers
$$   the Perl interpreter's process ID
$%   the current page number of the current output channel
$&   the string matched by the last pattern match
$(   the real group ID of the current process
$)   the effective group ID of the current process
$*   set to 1 to enable multi-line matching; superseded by the /s and /m modifiers
$,   the current output field separator
$.   the current input line number of the last file read
$/   the current input record separator, newline by default
$:   the set of characters after which a string may be broken to fill continuation fields
$;   the separator used when emulating multi-dimensional arrays
$?   the status returned by the last external command
$@   the error message returned by the Perl interpreter from an eval
$[   the index of the first element of an array
$\   the current output record separator
$]   the Perl interpreter's version number
$^   the name of the top-of-page output format for the current channel
$^A  the variable holding formatted data before printing
$^D  the value of the debugging flags
$^E  extended OS error information on non-UNIX platforms
$^F  the maximum file descriptor number
$^H  the syntax-check status enabled by the compiler
$^I  the value of the in-place edit control
$^L  the form feed sent to the output channel
$^M  the size of the emergency memory pool
$^O  the operating system name
$^P  the internal variable specifying the current debugging value
$^R  the result of the last evaluation of a regular expression block
$^S  the current interpreter state
$^T  the time, in epoch seconds, at which the script started running
$^W  the current value of the warning switch
$^X  the name of the Perl binary being executed
$_   the default input/output and pattern-matching space
$|   controls buffering of the currently selected output filehandle
$~   the name of the current report format
$`   the string preceding the last pattern match
$'   the string following the last pattern match
$+   the last bracket matched by the last regex search pattern
$<   the real user ID of the user running the interpreter
$1, $2, ...   the results of the corresponding capture groups of the last regex match
$=   the number of printable lines on the current page
$>   the effective user ID of the current process
$0   the file name of the script being executed
$ARGV   the current file name when reading from the default filehandle
%ENV    the environment variables
%INC    the files included via do or require
%SIG    the signals and how they are handled
@_      the argument list passed to a subroutine
@ARGV   the command line arguments passed to the script
@INC    the directories searched when importing modules
$-[0] and $+[0]   the start and end positions of the current regex match within the matched string

perldoc perlvar gives more detail, with examples. Below, an example of a Perl alarm built around $@:

#!/usr/bin/perl
my $timeout = 5;
$SIG{ALRM} = sub { die "alarm\n"; };
eval {
    alarm $timeout;
    sleep(6);
    alarm 0;
};
if ($@) {
    print "timeout\n";
} else {
    print "not timeout\n";
}
JBoss Tattletale is a tool that can help you get an overview of the project you are working on or a product that you depend on.
The tool will provide you with reports that can help you
* Identify dependencies between JAR files
* Find missing classes from the classpath
* Spot if a class/package is located in multiple JAR files
* Spot if the same JAR file is located in multiple locations
* With a list of what each JAR file requires and provides
* Verify the SerialVersionUID of a class
* Find similar JAR files that have different version numbers
* Find JAR files without a version number
* Find unused JAR files
* Identify sealed / signed JAR archives
* Locate a class in a JAR file
* Get the OSGi status of your project

Usage:

java -Xmx512m -jar tattletale.jar [-exclude=<excludes>] <scan-directory> [output-directory]

Note: tattletale only analyzes dependencies between jars; you have to package all your class files into jars yourself, put them into the scan-directory, and put the dependent libs into the same directory.
[abc]     A single character: a, b or c
[^abc]    Any single character but a, b, or c
[a-z]     Any single character in the range a-z
[a-zA-Z]  Any single character in the range a-z or A-Z
^         Start of line
$         End of line
\A        Start of string
\z        End of string
.         Any single character
\s        Any whitespace character
\S        Any non-whitespace character
\d        Any digit
\D        Any non-digit
\w        Any word character (letter, number, underscore)
\W        Any non-word character
\b        Any word boundary character
()        Capture everything enclosed
(a|b)     a or b
a?        Zero or one of a
a*        Zero or more of a
a+        One or more of a
a{3}      Exactly 3 of a
a{3,}     3 or more of a
a{3,6}    Between 3 and 6 of a
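A few of these in action with Java's java.util.regex, as a quick sanity check:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // \b word boundary, \d digits, () capture group, {3,} repetition
        Pattern p = Pattern.compile("\\berr-(\\d{3,})\\b");
        Matcher m = p.matcher("log: err-1042 seen twice, err-7 ignored");
        while (m.find()) {
            System.out.println(m.group(1)); // prints 1042 only; err-7 has fewer than 3 digits
        }
    }
}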
A tool called cloc, written in Perl, can help you do that: http://cloc.sourceforge.net/

prompt> cloc perl-5.10.0.tar.gz
    4076 text files.
    3883 unique files.
    1521 files ignored.
http://cloc.sourceforge.net v 1.50  T=12.0 s (209.2 files/s, 70472.1 lines/s)
-------------------------------------------------------------------------------
Language          files     blank   comment      code
-------------------------------------------------------------------------------
Perl               2052    110356    130018    292281
C                   135     18718     22862    140483
C/C++ Header        147      7650     12093     44042
Bourne Shell        116      3402      5789     36882
Lisp                  1       684      2242      7515
make                  7       498       473      2044
C++                  10       312       277      2000
XML                  26       231         0      1972
yacc                  2       128        97      1549
YAML                  2         2         0       489
DOS Batch            11        85        50       322
HTML                  1        19         2        98
-------------------------------------------------------------------------------
SUM:               2510    142085    173903    529677
-------------------------------------------------------------------------------

Here is the command line usage:

Usage: cloc-1.56.pl [options] <file(s)/dir(s)> | <set 1> <set 2> | <report files>
Sometimes a mailbox will not let you upload a file exceeding some size limit; you can split the file before uploading. An example:

Create a test file:
touch file1 file2; echo 1 > file1; echo 2 > file2; tar zvcf old.tar.gz file1 file2;

Split the file into segments of whatever size you wish:
split -b 50 old.tar.gz;

Restore the file from the segments:
cat xa* > new.tar.gz;

Verify the restore:
md5sum old.tar.gz new.tar.gz

Generally, the checksums should be identical.
In many cases, binaries can no longer dump core after calling setuid(). Under Linux it is possible to re-enable this with a system call.
e.g.
+#ifdef __linux__
+#include <sys/prctl.h>
+#endif
+
 #ifdef HAVE_purify
 #define IF_PURIFY(A,B) (A)
 #else
@@ -1362,6 +1366,10 @@
       sql_perror("setuid");
       unireg_abort(1);
     }
+#ifdef __linux__
+    /* inform kernel that process is dumpable */
+    prctl(PR_SET_DUMPABLE,1,0,0,0);
+#endif /* __linux__ */
 #endif

Manual of prctl:
PR_SET_DUMPABLE (Since Linux 2.4) Set the state of the flag determining whether core dumps are produced for this process upon delivery of a signal whose default behaviour is to produce a core dump. (Normally this flag is set for a process by default, but it is cleared when a set-user-ID or set-group-ID program is executed and also by various system calls that manipulate process UIDs and GIDs). In kernels up to and including 2.6.12, arg2 must be either 0 (process is not dumpable) or 1 (process is dumpable). Since kernel 2.6.13, the value 2 is also permitted; this causes any binary which normally would not be dumped to be dumped readable by root only. (See also the description of /proc/sys/fs/suid_dumpable in proc(5).)
Ref: http://bugs.mysql.com/bug.php?id=21723

Some files related to Linux core dumps:
/proc/sys/fs/suid_dumpable /etc/profile /etc/security/limits.conf /proc/sys/kernel/core_pattern
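Putting those knobs together, a typical way to re-enable and locate core dumps (the paths are illustrative):

# allow core files in the current shell
ulimit -c unlimited

# let setuid binaries dump core (2 = readable by root only; see prctl/suid_dumpable above)
echo 2 > /proc/sys/fs/suid_dumpable

# name cores by executable and pid, under /tmp
echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern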
0.5版本的sysbench已经支持multi-table Download src code bzr branch lp:sysbench make后就可以执行了 Percona对这个版本的sysbench的参数有个不错的wiki page: http://www.percona.com/docs/wiki/benchmark:sysbench:olpt.lua下面这个文章对0.5的sysbench有个比较全面的介绍: ./sysbench --mysql-host=$host1 --mysql-port=3306 --mysql-user=*** --mysql-password=*** --test=/path/to/sysbench/tests/db/oltp.lua --oltp-tables-count=$oltp_table_num --num-threads=$num_thread --max-requests=0 --max-time=$max_time prepare
./sysbench --mysql-host=$host1 --mysql-port=3306 --mysql-user=*** --mysql-password=*** --test=/path/to/sysbench/tests/db/oltp.lua --oltp-tables-count=$oltp_table_num --num-threads=$num_thread --max-requests=0 --max-time=$max_time run

Additionally, to use the --max-time parameter, you need to pair it with --max-requests=0.

--max-requests=N    limit for total number of requests [10000]
With N=0 the max-requests limit is removed.
MySQL 5.5 adds two new databases, information_schema & performance_schema:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+

But mysqldump errors out when backing either of them up:

mysqldump: Got error: 1044: Access denied for user 'root'@'localhost' to database 'information_schema' when using LOCK TABLES

mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to user 'root'@'localhost' for table 'cond_instances' when using LOCK TABLES

With --all-databases these two databases are not backed up either. The official documentation explains this: http://dev.mysql.com/doc/refman/5.5/en/mysqldump.html

To sum up: you can still back these two databases up with mysqldump if you really need to, but you must disable table locking. I used:

--database information_schema --lock-tables=0
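As a concrete invocation along those lines (credentials and output file are illustrative):

mysqldump -uroot -p --database information_schema --lock-tables=0 > information_schema.sql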
ssh-copy-id usage, to stop typing ssh passwords:

[root@hengtiandesk118 .ssh]# ssh-copy-id -i id_rsa.pub 10.1.186.51
10
Warning: Permanently added '10.1.186.51' (RSA) to the list of known hosts.
root@10.1.186.51's password:
Now try logging into the machine, with "ssh '10.1.186.51'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[root@hengtiandesk118 .ssh]# ssh 10.1.186.51
Last login: Thu May 10 18:33:55 2012 from 10.5.4.201
[root@xen186v01 ~]#

The script is available as soon as openssh-clients is installed. All this time I had been copying and pasting keys by hand...
So ssh can be used like this:

1. Remote file copy:

[root@xen74v01 ~]# cat test.pl
#!/usr/bin/perl
print "eth0.74"=~/(\w+)/;
print "\n";
[root@xen74v01 ~]# cat test.pl | ssh 10.1.74.76 'cat - > /tmp/test.pl'

When copying a large file without hogging network IO, use the pv tool for rate limiting:

pv -L10m test.pl | ssh 10.1.74.76 'cat - > /tmp/test.pl'

pv behaves much like cat but supports IO rate control, here capped at 10M/s.

2. Local script, remote execution:

[root@xen74v01 ~]# cat test.pl
#!/usr/bin/perl
print "eth0.74"=~/(\w+)/;
print "\n";
[root@xen74v01 ~]# perl test.pl
eth0
[root@xen74v01 ~]# cat test.pl | ssh 10.1.74.76 'perl'
eth0
[root@xen74v01 ~]# ssh 10.1.74.76 'perl' < test.pl
eth0
This way there is no need to copy the script to the remote side before running it.

References:
http://linux.icydog.net/ssh/piping.php
http://www.ivarch.com/programs/quickref/pv.shtml
http://www.mysqlperformanceblog.com/2009/05/20/hint-throttling-xtrabackup/
1. Tutorial: http://net-snmp.sourceforge.net/wiki/index.php/Tutorials
2. Configure & start the agent: snmpconf
3. snmpwalk example: snmpwalk -v2c -c public 10.1.74.51
4. Check MIB modules: snmptranslate -Dinit_mib .1.3 2>&1 | grep MIBDIR
5. Extending a MIB module: http://net-snmp.sourceforge.net/wiki/index.php/TUT:Writing_a_MIB_Module
   a. download the net-snmp source code
   b. MIB definition
   c. mib2c (in net-snmp-perl)
   d. make & make install
   e. edit the snmpd conf & restart the agent
   f. snmpwalk to verify
Install: http://www.graphviz.org/Download_linux_rhel.php
Documentation: http://www.graphviz.org/Documentation.php
Example:

$ cat cluster.dot
digraph G {

    subgraph cluster_0 {
        style=filled;
        color=lightgrey;
        node [style=filled,color=white];
        a0 -> a1 -> a2 -> a3;
        label = "process #1";
    }

    subgraph cluster_1 {
        node [style=filled];
        b0 -> b1 -> b2 -> b3;
        label = "process #2";
        color=blue
    }

    start -> a0;
    start -> b0;
    a1 -> b3;
    b2 -> a3;
    a3 -> a0;
    a3 -> end;
    b3 -> end;

    start [shape=Mdiamond];
    end [shape=Msquare];
}
$ dot -Tpng cluster.dot -o cluster.png $ gnome-open cluster.png
More examples: http://www.graphviz.org/Gallery.php
iozone is an open-source filesystem benchmark tool, usable for measuring the read/write performance of the current or a specified disk. http://www.iozone.org/

Installation: first make sure the rpmforge repository is installed; see http://www.blogjava.net/miaoyachun/archive/2012/02/03/369319.html

Then install directly with yum:

yum install iozone

Test:

iozone -i 0 -r 32 -s 2097152
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.394 $
	Compiled for 64 bit mode.
	Build: linux

	Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins
	              Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	              Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	              Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	              Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	              Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	              Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
	              Ben England.

	Run began: Wed Apr 25 19:36:55 2012

	Record Size 32 KB
	File size set to 2097152 KB
	Command line used: iozone -i 0 -r 32 -s 2097152
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.

	                                            random  random    bkwd   record   stride
	      KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
	 2097152      32  844758 2001670

iozone test complete.
When innobackupex (my version: 1.6.5) takes a full backup and reaches the MyISAM data, it issues FLUSH TABLES WITH READ LOCK to stop writes to MyISAM (assuming no --no-lock option):

sub backup {
    if (!$option_incremental && !$option_no_lock) {
        # make a prep copy before locking tables, if using rsync
        backup_files(1);

        # flush tables with read lock
        mysql_lockall();
    }

    if ($option_slave_info) {
        write_slave_info();
    }
}

sub mysql_lockall {
    if (compare_versions($mysql_server_version, '4.0.22') == 0
        || compare_versions($mysql_server_version, '4.1.7') == 0) {
        # MySQL server version is 4.0.22 or 4.1.7
        mysql_send "COMMIT;";
        mysql_send "FLUSH TABLES WITH READ LOCK;";
    } else {
        # MySQL server version is other than 4.0.22 or 4.1.7
        mysql_send "FLUSH TABLES WITH READ LOCK;";
        mysql_send "COMMIT;";
    }
    write_binlog_info;
}

But if a heavy workload is still running during the backup, "FLUSH TABLES WITH READ LOCK" can take quite a while; see http://www.mysqlperformanceblog.com/2010/04/24/how-fast-is-flush-tables-with-read-lock/

The --no-lock option's description:

--no-lock
    Use this option to disable table lock with "FLUSH TABLES WITH READ LOCK". Use it only if ALL your tables are InnoDB and you DO NOT CARE about the binary log position of the backup.

If we can guarantee that the workload is InnoDB-only, we can use this option. I remember that in version 1.5, using --no-lock meant xtrabackup_slave_info did not record the log file & position at backup time. That problem was fixed in 1.6.5:

if ($option_slave_info) {
    write_slave_info();
}

In 1.5, the xtrabackup_slave_info & xtrabackup_binlog_info files were updated inside mysql_lockall; the new version moves write_slave_info out of mysql_lockall.
Sometimes we want to dump a complex hash not to the console but to a file; var_export can do that:

<?php

$a = array('abc'=>"123");

# var_dump
var_dump($a);

# var_export
echo var_export($a);

# export to file
$b = var_export($a, true);
error_log($b."\n", 3, "/tmp/ymiao.log");

Result:

cat /tmp/ymiao.log
array (
  'abc' => '123',
)
When restoring with xtrabackup's --copy-back (version 1.6.4), you may get this error:

IMPORTANT: Please check that the copy-back run completes successfully.
           At the end of a successful copy-back run innobackupex-1.6.4
           prints "completed OK!".

Original data directory is not empty! at innobackupex-1.6.4 line 544.

Reading those lines, we find:

# check that original data directories exist and they are empty
if_directory_exists_and_empty($orig_datadir, "Original data");
if_directory_exists_and_empty($orig_ibdata_dir, "Original InnoDB data");
if_directory_exists_and_empty($orig_iblog_dir, "Original InnoDB log");

Googling for the reason behind this check finds it mentioned in http://www.mysqlperformanceblog.com/2011/12/19/percona-xtrabackup-1-6-4/:

innobackupex did not check that MySQL datadir was empty before --copy-back was run. With this bug fix, innobackupex will now error out of the --copy-back operation if the destination is not empty, avoiding potential data loss or a strange combination of a restored backup and previous data. Bug Fixed: #737569 (Valentine Gostev)
1. Install the bzr tool, ref http://dev.mysql.com/doc/refman/5.1/en/installing-development-tree.html
2. Download the MySQL 5.1 code from trunk.
3. autoreconf --force --install
4. ./configure --with-debug --without-libedit --with-plugins=innobase
5. make & sudo make install
6. Set up the Eclipse CDT environment:
   a. start Eclipse via a sudo cmd or as the root user
   b. build the project
   c. set up the debug dialog, ref http://forge.mysql.com/wiki/Eclipse/CDT_on_Linux_and_Mac_OS_X; here are my "program parameters" when starting the mysqld instance:
      --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --port=3306 --socket=/tmp/mysql.sock --default-storage-engine=innodb
1. Install vncserver:
   sudo yum install vnc-server
2. Set a password for the login user:
   vncpasswd
3. Start one server to generate the settings file ~/.vnc/xstartup:
   vncserver :1
4. Edit ~/.vnc/xstartup as its comments suggest, uncommenting two lines:
   unset SESSION_MANAGER
   exec /etc/X11/xinit/xinitrc
5. Restart the server:
   vncserver -kill :1
   vncserver -geometry 1200x900 -depth 16 :1
   For a widescreen, use vncserver -geometry 1366x768 -depth 16 :1; tweak to taste. My current best resolution is vncserver -geometry 1346x680 -depth 16 :1.
6. Access it with vncviewer:
   vncviewer SERVER_IP:5901

Ref: VNC How TO
# prepare test database
createDB="drop database if exists $database; create database if not exists $database;"
mysql -h $vip --port=$port -uadmin -padmin -e "$createDB"
# prepare data prepare="$sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=$row" prepare="$prepare --mysql-port=$port --mysql-host=$vip --mysql-db=$database" prepare="$prepare --mysql-password=admin --mysql-user=admin prepare" $prepare
# run sysbench # http://sysbench.sourceforge.net/docs/#database_mode see this link for more options run="$sysbench --test=oltp --mysql-table-engine=innodb --init-rng --oltp-table-size=$row" run="$run --max-requests=0 --max-time=900 --num-threads=128 --oltp-test-mode=complex" run="$run --oltp-point-selects=2 --oltp-simple-ranges=1 --oltp-sum-ranges=2" run="$run --oltp-index-updates=10 --oltp-non-index-updates=5 --mysql-port=$port" run="$run --mysql-db=$database --mysql-host=$vip --mysql-password=admin --mysql-user=admin"
# support oltp-user-delay-min to add delay for each sysbench request if [[ "$lag" != "nolag" ]] then run="$run --oltp-user-delay-min=$lag" fi run="$run run"
Reference: jmap & jhat. By analyzing the number and size of objects in the heap, you can pin down which class is causing the problem.
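A typical workflow with the two tools (the pid 1234 and file name are illustrative):

# dump the live heap of process 1234 to a binary file
jmap -dump:live,format=b,file=heap.bin 1234

# browse the dump at http://localhost:7000
jhat heap.bin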
1. Avoid throwing exceptions where possible. Exceptions have a cost; in particular, avoid using exceptions for flow control.
2. Handle exceptions where you can. If you are able to handle an exception, do so; otherwise the outer functions accumulate too many of them.
3. If you cannot handle it, throw it. Ask yourself whether this exception can be handled here; if not, throw it directly (see principle 4).
4. Throw early and catch late. Low-level functions generally do not handle exceptions; an outer function catches them and, based on context, handles or converts them.
5. Do not swallow exceptions.
6. try blocks should not be too large (coding convention).
7. A function should not throw too many kinds of exceptions (coding convention).
Reference
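A small sketch of "throw early, catch late" under assumed names (loadConfig and the file name are made up):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ThrowEarlyCatchLate {
    // throw early: validate input at the boundary and fail fast
    static String loadConfig(Path path) throws IOException {
        if (path == null || !Files.isReadable(path)) {
            throw new IllegalArgumentException("config not readable: " + path);
        }
        return new String(Files.readAllBytes(path));
    }

    // low-level code does not catch; the top level has the context to decide
    public static void main(String[] args) {
        try {
            System.out.println(loadConfig(Paths.get("app.conf")));
        } catch (IllegalArgumentException | IOException e) {
            // catch late: here we know enough to report and exit cleanly
            System.err.println("startup failed: " + e.getMessage());
        }
    }
}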
Ref http://ubuntudaily.net/archives/15

Enhanced version:

trim() {
    local s="$*"
    # strip leading whitespace
    s="${s#"${s%%[![:space:]]*}"}"
    # strip trailing whitespace
    s="${s%"${s##*[![:space:]]}"}"
    echo "$s"
}

var="  a bc  "
var="$(trim "$var")"
An automatic ssh login script written by a colleague:

#!/usr/bin/expect -f
# by gwang

# default password list
array set passwd {
    0 "password1"
    1 "password2"
    2 "password3"
}

# try login
spawn $env(SHELL)
match_max 100000
send -- "ssh -p $port $user@$ip\r"
foreach i [array names passwd] {
    expect {
        "(yes/no)" {
            send -- "yes\r"
            exp_continue
        }
        "password:" {
            send -- "$passwd($i)\r"
        }
        "Last login" {
            break
        }
    }
}
interact

Since the ssh client's default password retry count is 3, only three candidate passwords are supported here. Googling "ssh client password retry" turns up a helpful link (ssh login retry) showing that you only need to change the NumberOfPasswordPrompts option in the ssh client config /etc/ssh/ssh_config; no sshd restart required...
Disable/Enable a port

# disable port 29600
iptables -I INPUT -p tcp --dport 29600 -j DROP
iptables -I OUTPUT -p tcp --dport 29600 -j DROP

# enable port 29600 after it was disabled
iptables -D INPUT -p tcp --dport 29600 -j DROP
iptables -D OUTPUT -p tcp --dport 29600 -j DROP

Block an IP address

# Block incoming packets from an address; all packets coming from it will be dropped
iptables -A INPUT -s 192.168.1.5 -j DROP

# Block outgoing packets to an address; all packets sent to it will be dropped
iptables -A OUTPUT -p tcp -d 192.168.1.2 -j DROP

Disable NIC traffic

# disable
iptables -A INPUT -j DROP -i eth1
iptables -A OUTPUT -j DROP -o eth1

# enable back
iptables -D INPUT -j DROP -i eth1
iptables -D OUTPUT -j DROP -o eth1

Links:
http://wiki.centos.org/HowTos/Network/IPTables
http://www.thegeekstuff.com/2011/06/iptables-rules-examples/
Official ref: innodb_flush_log_at_trx_commit. This parameter controls how transaction commits write the transaction (redo) log:

innodb_flush_log_at_trx_commit = 0  # write the log once per second and fsync it
innodb_flush_log_at_trx_commit = 1  # write the log at every transaction commit and fsync it
innodb_flush_log_at_trx_commit = 2  # write the log at every transaction commit, but without an immediate fsync (it is flushed about once per second)
A broken /etc/fstab file makes fsck fail at Linux boot and drops you into the "repair filesystem" shell. The fix:
1. Run mount -o remount,rw / to get read-write mode.
2. Edit /etc/fstab to correct the typo or invalid setting.
3. Exit the repair filesystem shell to reboot.
If you find device volumes created on your GNOME desktop, that is GNOME behavior. Ref: the best way to stop GNOME auto-mounting Windows partitions on Fedora 8/9/10