What is the Kafka source code written in, and is it worth reading?



How do I compile the Apache Kafka source code?

"Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design." Kafka is a high-throughput distributed publish/subscribe messaging system with the following features:

(1) Message persistence via an O(1) disk data structure, which maintains stable performance over time even with terabytes of stored messages.

(2) High throughput: even on very ordinary hardware, Kafka can handle hundreds of thousands of messages per second.

(3) Messages can be partitioned across Kafka servers and consumed by clusters of consumer machines (see the example after this list).

(4) Support for parallel data loading into Hadoop.
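As a minimal illustration of feature (3), here is how a partitioned topic can be created with the scripts that ship in the built distribution (the topic name and counts below are arbitrary examples, and a local ZooKeeper on port 2181 is assumed):

# Create a topic named "test" with 3 partitions and a replication factor of 1
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 3 --topic test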

1. Compiling with the gradlew script bundled with Kafka

Once you have downloaded the Kafka source code, you will find a gradlew script inside it, which we can use to build Kafka:

# wget
# tar -zxf kafka-0.8.1.1-src.tgz
# cd kafka-0.8.1.1-src
# ./gradlew releaseTarGz

Running the commands above will fail with the following exception:

:core:signArchives FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:signArchives'.
Cannot perform signing task ':core:signArchives' because it
has no configured signatory

* Try:
Run with --stacktrace option to get the stack trace. Run with
--info or --debug option to get more log output.

BUILD FAILED

This is a known bug; you can work around it by building with the following command instead:

./gradlew releaseTarGzAll -x signArchives

This time the build will succeed (a large number of warnings will be printed during compilation). We can also specify which Scala version to build against:

./gradlew -PscalaVersion=2.10.3 releaseTarGz -x signArchives

After the build finishes, a kafka_2.10-0.8.1.1.tgz file is generated under core/build/distributions/. It is the same as the tarball you would download from the website and can be used directly.
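For example, you can unpack the freshly built tarball and treat it like an official release (the target directory below is just an illustration):

# Unpack the generated distribution to a directory of your choice
tar -zxf core/build/distributions/kafka_2.10-0.8.1.1.tgz -C /opt
# The usual startup scripts are now available
ls /opt/kafka_2.10-0.8.1.1/bin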

2. Compiling with sbt

We can also build Kafka with sbt, as follows:

# git clone
# cd kafka
# git checkout -b 0.8 remotes/origin/0.8
# ./sbt update
[info] [SUCCESSFUL ] org.eclipse.jdt#core;3.1.1!core.jar (2243ms)
[info] downloading ...
[info] [SUCCESSFUL ] ant#ant;1.6.5!ant.jar (1150ms)
[info] Done updating.
[info] Resolving org.apache.hadoop#hadoop-core;0.20.2 ...
[info] Done updating.
[info] Resolving com.yammer.metrics#metrics-annotation;2.2.0 ...
[info] Done updating.
[info] Resolving com.yammer.metrics#metrics-annotation;2.2.0 ...
[info] Done updating.
[success] Total time: 168 s, completed Jun 18, 2014 6:51:38 PM

# ./sbt package
[info] Set current project to Kafka (in build file:/export1/spark/kafka/)
Getting Scala 2.8.0 ...
:: retrieving :: org.scala-sbt#boot-scala
confs: [default]
3 artifacts copied, 0 already retrieved (14544kB/27ms)
[success] Total time: 1 s, completed Jun 18, 2014 6:52:37 PM

For Kafka 0.8 and later, you also need to run:

# ./sbt assembly-package-dependency
[info] Loading project definition from /export1/spark/kafka/project
[warn] Multiple resolvers having different access mechanism configured with
same name 'sbt-plugin-releases'. To avoid conflict, Remove duplicate project
resolvers (`resolvers`) or rename publishing resolver (`publishTo`).
[info] Set current project to Kafka (in build file:/export1/spark/kafka/)
[warn] Credentials file /home/wyp/.m2/.credentials does not exist
[info] Including slf4j-api-1.7.2.jar
[info] Including metrics-annotation-2.2.0.jar
[info] Including scala-compiler.jar
[info] Including scala-library.jar
[info] Including slf4j-simple-1.6.4.jar
[info] Including metrics-core-2.2.0.jar
[info] Including snappy-java-1.0.4.1.jar
[info] Including zookeeper-3.3.4.jar
[info] Including log4j-1.2.15.jar
[info] Including zkclient-0.3.jar
[info] Including jopt-simple-3.2.jar
[warn] Merging 'META-INF/NOTICE' with strategy 'rename'
[warn] Merging 'org/xerial/snappy/native/README' with strategy 'rename'
[warn] Merging 'META-INF/maven/org.xerial.snappy/snappy-java/LICENSE'
with strategy 'rename'
[warn] Merging 'LICENSE.txt' with strategy 'rename'
[warn] Merging 'META-INF/LICENSE' with strategy 'rename'
[warn] Merging 'META-INF/MANIFEST.MF' with strategy 'discard'
[warn] Strategy 'discard' was applied to a file
[warn] Strategy 'rename' was applied to 5 files
[success] Total time: 3 s, completed Jun 18, 2014 6:53:41 PM

Of course, we can also specify the Scala version when invoking sbt:


sbt "++2.10.3 update"
sbt "++2.10.3 package"
sbt "++2.10.3 assembly-package-dependency"

How do I compile source code written against the librdkafka library?

You can keep the .so in the current directory, but then you need to set the library search path first:

export LD_LIBRARY_PATH=./:/usr/local/pet20/lib:/lib/:/usr/local/lib

With this, the dynamic loader will automatically look in the current directory when your program starts. If you load the library explicitly (e.g. via dlopen), this is not needed, since you specify the path to the .so file inside the program itself. Either way, keeping the library in the current directory works fine.
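A minimal sketch of a typical build-and-run cycle under that setup (rdkafka_example.c and the output name are placeholders, not from the original text):

# Link against librdkafka; -L. lets the linker find librdkafka.so in the current directory
gcc -o rdkafka_example rdkafka_example.c -L. -lrdkafka
# Make the runtime loader search the current directory as well
export LD_LIBRARY_PATH=./:/usr/local/pet20/lib:/lib/:/usr/local/lib
# The binary now resolves librdkafka.so from the current directory
./rdkafka_example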

Does Kafka have Java source code?

Here I am using the ZooKeeper instance that ships with Kafka.

The Kafka log files are also left in the default location, i.e. under /tmp; I did not change anything.
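Those defaults come from the property files shipped with Kafka; in the 0.8.x configs they look roughly like this (verify against your own files):

# config/zookeeper.properties
dataDir=/tmp/zookeeper
# config/server.properties
log.dirs=/tmp/kafka-logs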

1. [hadoop@sparksinglenode kafka_2.10-0.8.1.1]$ jps
2625 Jps

2. [hadoop@sparksinglenode kafka_2.10-0.8.1.1]$ bin/zookeeper-server-start.sh config/zookeeper.properties

At this point the shell just sits there, because the process is running in the foreground.

Open another terminal window:

3. [hadoop@sparksinglenode kafka_2.10-0.8.1.1]$ bin/kafka-server-start.sh config/server.properties

This also runs in the foreground (see the background-mode sketch below).
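If you would rather not occupy two terminals, one common alternative (a sketch, not part of the original answer) is to push both processes into the background and verify them with jps:

nohup bin/zookeeper-server-start.sh config/zookeeper.properties > zk.log 2>&1 &
nohup bin/kafka-server-start.sh config/server.properties > kafka.log 2>&1 &
jps   # should now list QuorumPeerMain (ZooKeeper) and Kafka alongside Jps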

What background do you need to study Apache Kafka source code analysis?

First, get a firm grasp of how to use the STL and use it heavily for a good long while, keeping your coding style as STL-like as possible (that really is a prerequisite for reading the STL source; I personally cannot stand code that is all templates and iterators, which is why I still have not studied the STL source).

Also, these days I am instinctively distrustful of labels like "solid foundation", "proficient", and "expert".

Which book is better, "Kafka技术内幕" (Kafka Internals) or "Apache Kafka源码剖析" (Apache Kafka Source Code Analysis), and why?

Jafka/Kafka

Kafka is an Apache project: a high-performance, cross-language, distributed publish/subscribe message queue system. Jafka was incubated on top of Kafka, i.e. an upgraded version of Kafka. It has the following features: fast persistence, with messages persisted at O(1) system overhead; high throughput, reaching around 100,000 messages per second on an ordinary server; a fully distributed design, where brokers, producers, and consumers all natively support distribution with automatic load balancing; and support for parallel data loading into Hadoop. For workloads like Hadoop-style log data and offline analysis systems that nevertheless require real-time processing, it is a viable solution. Kafka unifies online and offline message processing through Hadoop's parallel loading mechanism, which is also what the system studied in this project values. Compared with ActiveMQ, Apache Kafka is a very lightweight messaging system; besides excellent performance, it is also a well-behaved distributed system.

Other message queues such as HornetQ, Apache Qpid, Sparrow, Starling, Kestrel, Beanstalkd, and Amazon SQS will not be analyzed one by one here.
