ngx_http_dyups_module's Issues

exit_process is called during ngx_worker_process_init??

hi
I have run into a problem I cannot explain: looking at the stack in the core file, exit_process is called during worker_process_init.
#9 0x0000000001b57dc0 in ?? ()
#10 0x0000000001a6edd8 in ?? ()
#11 0x0000000000426a86 in ngx_destroy_pool (pool=0x3) at src/core/ngx_palloc.c:53
#12 0x00000000004dab26 in ngx_http_dyups_exit_process (cycle=0x9a60) at ../nginx-addons.git/ngx_http_dyups_module/ngx_http_dyups_module.c:336
#13 0x00000000004439f6 in ngx_worker_process_init (cycle=, worker=) at src/os/unix/ngx_process_cycle.c:939
#14 0x000000000000000b in ?? ()

The module struct has not changed:

ngx_module_t  ngx_http_dyups_module = {
    NGX_MODULE_V1,
    &ngx_http_dyups_module_ctx,    /* module context */
    ngx_http_dyups_commands,       /* module directives */
    NGX_HTTP_MODULE,               /* module type */
    NULL,                          /* init master */
    NULL,                          /* init module */
    NULL,                          /* init process */
    NULL,                          /* init thread */
    NULL,                          /* exit thread */
    ngx_http_dyups_exit_process,   /* exit process */
    NULL,                          /* exit master */
    NGX_MODULE_V1_PADDING
};

The servers in the upstream are getting replaced

Hi,

It looks like the old upstream servers are being replaced while adding a new one.

demo_1:~/dyups$ curl 127.0.0.1:8081/detail/
mywebserver
server 10.0.1.14:80

test2
server 10.0.1.15:80

demo_1:~/dyups$ curl -d "server 10.0.1.19:80;" 127.0.0.1:8081/upstream/mywebserver2
success

I added the .19 server above.

demo_1:~/dyups$ curl 127.0.0.1:8081/detail/
mywebserver
server 10.0.1.14:80
test2
server 10.0.1.15:80
mywebserver2
server 10.0.1.19:80

Now I am adding .20:

demo_1:~/dyups$ curl -d "server 10.0.1.20:80;" 127.0.0.1:8081/upstream/mywebserver2
success

demo_1:~/dyups$ curl 127.0.0.1:8081/detail/
mywebserver
server 10.0.1.14:80

test2
server 10.0.1.15:80

mywebserver2
server 10.0.1.20:80

I am seeing .20 now, but where is .19? It is gone.

Can you please look into this and tell me what could be wrong?

Thanks. Santos
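
The output above suggests that the POSTed body is treated as the complete definition of the upstream rather than an incremental addition. Under that assumption, one way to keep .19 while adding .20 would be to send both servers in a single body (a sketch reusing the addresses from this report):

    curl -d "server 10.0.1.19:80; server 10.0.1.20:80;" 127.0.0.1:8081/upstream/mywebserver2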

error in Calling C from Lua

typedef int (*lua_CFunction) (lua_State *L);
A lua_CFunction returns an integer giving the number of values it leaves on the stack.
static int
ngx_http_lua_update_upstream(lua_State *L)
{
    size_t      size;
    ngx_int_t   status;
    ngx_str_t   name, rv;
    ngx_buf_t   buf;

    if (lua_gettop(L) != 2) {
        return luaL_error(L, "exactly 2 arguments expected");
    }

    name.data = (u_char *) luaL_checklstring(L, 1, &name.len);
    buf.pos = buf.start = (u_char *) luaL_checklstring(L, 2, &size);
    buf.last = buf.end = buf.pos + size;

    status = ngx_dyups_update_upstream(&name, &buf, &rv);

    lua_pushlstring(L, (char *) rv.data, rv.len);
    lua_pushinteger(L, (lua_Integer) status);

    return 1;
}

It should be:

    lua_pushinteger(L, (lua_Integer) status);
    lua_pushlstring(L, (char *) rv.data, rv.len);

    return 2;

What is the dyups_upstream_conf directive for?

What is the dyups_upstream_conf directive actually used for? In testing I found that an include alone is enough: the upstreams are loaded whether or not dyups_upstream_conf is present. So what exactly does dyups_upstream_conf do? I cannot figure it out.

dyups count accounting issue

hi
The count field in ngx_http_dyups_srv_conf_t is meant to track the number of outstanding requests on that upstream, right?

count is incremented in get_peer. If an upstream has several servers, proxy_next_upstream calls get_peer again on timeouts and similar errors, so count is incremented multiple times, while count is only decremented in the clean_request handler. This leaves the increments and decrements unbalanced.

Shouldn't the decrement be moved to the free_peer handler instead?

compile failed with nginx-1.5.3

Hi,

I am having trouble compiling the module with nginx-1.5.3.

It would be great if you could fix that.

cc -c -pipe -O2 -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs -I src/http -I src/http/modules -I src/http/modules/perl -I src/mail
-o objs/addon/ngx_http_dyups_module/ngx_http_dyups_module.o
nginx-1.5.3/../ngx_http_dyups_module/ngx_http_dyups_module.c
nginx-1.5.3/../ngx_http_dyups_module/ngx_http_dyups_module.c: In function ‘ngx_http_dyups_init’:
nginx-1.5.3/../ngx_http_dyups_module/ngx_http_dyups_module.c:458:24: error: ‘dscf’ may be used uninitialized in this function [-Werror=uninitialized]
cc1: all warnings being treated as errors
make[1]: *** [objs/addon/ngx_http_dyups_module/ngx_http_dyups_module.o] Error 1
make[1]: Leaving directory `nginx-1.5.3'
make: *** [build] Error 2

Deleted upstream still effective.

Hi, @yzprofile Thanks a lot for the module, it's exactly what I am looking for.

I am trying it with nginx 1.4.7 stable. I put all my upstreams into conf/upstreams.conf, added dyups_upstream_conf upstreams.conf; include upstreams.conf; to the http block in my nginx.conf, and set up dyups_interface at 127.0.0.1:8081.

It works well when I call curl 127.0.0.1:8081/detail, and when I delete one of my upstreams with curl -i -X DELETE 127.0.0.1:8081/upstream/balzac it disappears from /detail afterwards. But I still get a correct response when I call the server that uses that upstream, which I would expect to fail since the upstream is deleted.

Am I doing it right? Thank you very much.
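
For what it's worth, a minimal way to double-check what each interface reports after the deletion (the paths are taken from this report; the behaviour of the per-upstream GET is an assumption):

    curl -i -X DELETE 127.0.0.1:8081/upstream/balzac
    curl 127.0.0.1:8081/detail              # no longer lists balzac
    curl -i 127.0.0.1:8081/upstream/balzac  # assumed to report the upstream as not found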

The keepalive module may cause a core dump

hi
Me again. I found another core:

(gdb) bt
#0 0x000000000048f3f1 in ngx_http_upstream_keepalive_close_handler (ev=) at src/http/modules/ngx_http_upstream_keepalive_module.c:405
#1 0x0000000000445d31 in ngx_epoll_process_events (cycle=0x1cf7790, timer=, flags=) at src/event/modules/ngx_epoll_module.c:683
#2 0x000000000043c8c7 in ngx_process_events_and_timers (cycle=0x1cf7790) at src/event/ngx_event.c:249
#3 0x0000000000443c99 in ngx_worker_process_cycle (cycle=0x1cf7790, data=) at src/os/unix/ngx_process_cycle.c:807
#4 0x0000000000442695 in ngx_spawn_process (cycle=0x1cf7790, proc=0x443be0 <ngx_worker_process_cycle>, data=0x3, name=0xab9696 "worker process", respawn=3)

at src/os/unix/ngx_process.c:198

#5 0x0000000000445031 in ngx_reap_children (cycle=0x1cf7790) at src/os/unix/ngx_process_cycle.c:619
#6 ngx_master_process_cycle (cycle=0x1cf7790) at src/os/unix/ngx_process_cycle.c:180
#7 0x0000000000424e51 in main (argc=, argv=) at src/core/nginx.c:412

(gdb) f 0
#0 0x000000000048f3f1 in ngx_http_upstream_keepalive_close_handler (ev=) at src/http/modules/ngx_http_upstream_keepalive_module.c:405

405 in src/http/modules/ngx_http_upstream_keepalive_module.c
(gdb) info locals
conf = 0x729edabeed699ee8
item = 0x1dca0f0
n =
buf = ""
c = 0x7fe620c98a90
(gdb) p item
$1 = {conf = 0x729edabeed699ee8, queue = {prev = 0x11a01ef97ce0b282, next = 0xeaaa87dd7ecdb0d3}, connection = 0x849a5e417e6e7345, socklen = 2680762079,
sockaddr = "P\374TӬ\347\360o\202\257\255\065Qo-\255\340G\222\335\301\336%\210\345I\347\200Tp}+\313\315\343z
}S\374\317c&\223U\245\005\263_\221\310ťk6wms\245]Z\352\326\330\313C4\242\071T\372)\350\177\034W\312B\233n\351\237M*\260k\226i\243\023\306#Q\326a\215\356\274\031(\206\070\233\315\373dh\212", <incomplete sequence \355\236>}
(gdb) p *conf
Cannot access memory at address 0x729edabeed699ee8

From this core you can see that by the time close_handler ran, the memory it pointed to had already been freed. The error log confirms it: about one minute before the core, this upstream was deleted.
2014/07/23 07:16:22 [error] 11671#0: deleted upstream

Looking at the code, ngx_http_upstream_free_keepalive_peer registers c->read->handler = ngx_http_upstream_keepalive_close_handler;

At that point the upstream should not be deleted even if its dyups reference count is 0; otherwise, when the keepalive timeout fires and close_handler runs, it crashes like this.

Looking forward to your solution!

max_conns not supported

server 127.0.0.1:81; server 127.0.0.1:82 max_conns=1;

With this body the interface returns 502; after removing max_conns=1 it returns 200.

nginx log:

invalid parameter "max_conns=1" in command line

Using Dyups causes core dump

I can't seem to use Dyups in production with Nginx 1.7.6. My configuration isn't all that interesting; some of the video streaming stuff has been stripped out:

nginx version: nginx/1.7.6
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) 
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log 
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock 
--http-client-body-temp-path=/var/cache/nginx/client_temp 
--http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
 --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp 
--user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module 
--with-http_gunzip_module --with-http_gzip_static_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module
 --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic'
 --add-module=/root/rpmbuild/BUILD/nginx-1.7.6/headers-more-nginx-module-0.25 --add-module=/root/rpmbuild/BUILD/nginx-1.7.6/yzprofile-ngx_http_dyups_module-0b0d978

I have followed your sample implementation and the upstream starts as expected. After sending a cURL request to the dyups interface to modify the upstream configuration, however, nginx dumps core. If dyups_upstream_conf is used, the server keeps trying to load dynamically and only serves an empty response. If it is absent, the server keeps segfaulting.

I have posted a full backtrace of the crash at https://gist.github.com/bradq/c49b596266e8e4d201fb and can certainly post my configuration, though it is not terribly unusual save for some SSL session caching and SPDY (removing both from the configuration did not help). Any input? Thanks much in advance.

dyups reports an "abnormal exits" error when marking an upstream server up/down

When marking an upstream server up or down, the error log shows the following dyups errors:

2015/08/12 19:27:54 [alert] 23935#0: [dyups] process start after abnormal exits
2015/08/12 19:27:56 [crit] 23935#0: open() "/opt/app/nginx/conf/dyupstream.conf" failed (2: No such file or directory)
2015/08/12 19:27:56 [crit] 23935#0: [dyups] process restore upstream failed

nginx.conf is configured with
dyups_upstream_conf conf/dyupstream.conf; this file does not exist.

We found that when marking a server down, the command takes effect on some machines but fails on others, yet dyups still returns 200 on the failing machines. On those machines the healthcheck module still reports the server status as up, so the command really did not take effect.

Could this instability be caused by the missing dyupstream.conf file?
Is the upstream model that dyups modifies kept in shared memory? If a new process is respawned, are the in-memory upstream changes lost?

Also, does dyups_upstream_conf support path globbing, e.g. upstream_confs/* ?

coredump when running in the cache manager process

init_worker_by_lua 'dyups.update...'
#0  0x0000000000516890 in ngx_http_dyups_read_msg_locked (ev=0x7d0140)
    at ../ngx_http_dyups_module/ngx_http_dyups_module.c:1957
#1  0x0000000000514d0e in ngx_dyups_update_upstream (name=0x7fff4088de70, 
    buf=0x7fff4088de10, rv=0x7fff4088de60)
    at ../ngx_http_dyups_module/ngx_http_dyups_module.c:1179
#2  0x00000000005175df in ngx_http_lua_update_upstream (L=0x41da7860)
    at ../ngx_http_dyups_module/ngx_http_dyups_lua.c:27
#3  0x0000000000535ec7 in lj_BC_FUNCC ()
#4  0x00000000004e6b8b in ngx_http_lua_run_thread (L=0x41d96378, r=0xf55510, 
    ctx=0xf55a40, nrets=1) at ../lua-nginx-module/src/ngx_http_lua_util.c:1015
#5  0x00000000005000cd in ngx_http_lua_socket_tcp_resume_helper (r=0xf55510, 
    socket_op=1) at ../lua-nginx-module/src/ngx_http_lua_socket_tcp.c:5190
#6  0x00000000004fff0f in ngx_http_lua_socket_tcp_read_resume (r=0xf55510)
    at ../lua-nginx-module/src/ngx_http_lua_socket_tcp.c:5122
#7  0x00000000004ecea0 in ngx_http_lua_content_wev_handler (r=0xf55510)
    at ../lua-nginx-module/src/ngx_http_lua_contentby.c:131
#8  0x00000000004fb4bc in ngx_http_lua_socket_handle_read_success (r=0xf55510, 
    u=0x41db36e8) at ../lua-nginx-module/src/ngx_http_lua_socket_tcp.c:2952
#9  0x00000000004f946f in ngx_http_lua_socket_tcp_read (r=0xf55510, 
    u=0x41db36e8) at ../lua-nginx-module/src/ngx_http_lua_socket_tcp.c:2071
#10 0x00000000004fae84 in ngx_http_lua_socket_read_handler (r=0xf55510, 
    u=0x41db36e8) at ../lua-nginx-module/src/ngx_http_lua_socket_tcp.c:2752
#11 0x00000000004fad0e in ngx_http_lua_socket_tcp_handler (ev=0xf94490)
    at ../lua-nginx-module/src/ngx_http_lua_socket_tcp.c:2703
#12 0x000000000044ee81 in ngx_epoll_process_events (cycle=0xf4cd20, 
    timer=59996, flags=1) at src/event/modules/ngx_epoll_module.c:822
#13 0x000000000043fabc in ngx_process_events_and_timers (cycle=0xf4cd20)
    at src/event/ngx_event.c:242
#14 0x000000000044db2d in ngx_cache_manager_process_cycle (cycle=0xf4cd20, 
    data=0x7bcfe0) at src/os/unix/ngx_process_cycle.c:1141
#15 0x000000000044931e in ngx_spawn_process (cycle=0xf4cd20, 
    proc=0x44d981 <ngx_cache_manager_process_cycle>, data=0x7bcfe0, 
    name=0x577983 "cache loader process", respawn=-1)
    at src/os/unix/ngx_process.c:198
#16 0x000000000044bab2 in ngx_start_cache_manager_processes (cycle=0xf4cd20, 
    respawn=0) at src/os/unix/ngx_process_cycle.c:413
#17 0x000000000044af0d in ngx_master_process_cycle (cycle=0xf4cd20)
    at src/os/unix/ngx_process_cycle.c:132
#18 0x0000000000416e59 in main (argc=3, argv=0x7fff4088e888)
    at src/core/nginx.c:359
(gdb) 
(gdb) p ngx_pid
$1 = 9228
(gdb) p ev->log
$2 = (ngx_log_t *) 0x0
(gdb) p ev
$3 = (ngx_event_t *) 0x7d0140
(gdb) p *ev
$4 = {
  data = 0x0, 
  write = 0, 
  accept = 0, 
  instance = 0, 
  active = 0, 
  disabled = 0, 
  ready = 0, 
  oneshot = 0, 
  complete = 0, 
  eof = 0, 
  error = 0, 
  timedout = 0, 
  timer_set = 0, 
  delayed = 0, 
  deferred_accept = 0, 
  pending_eof = 0, 
  posted = 0, 
  closed = 0, 
  channel = 0, 
  resolver = 0, 
  cancelable = 0, 
  available = 0, 
  handler = 0, 
  index = 0, 
  log = 0x0, 
  timer = {
    key = 0, 
    left = 0x0, 
    right = 0x0, 
    parent = 0x0, 
    color = 0 '\000', 
    data = 0 '\000'
  }, 
  queue = {
    prev = 0x0, 
    next = 0x0
  }
}

Changes do not take effect with Nginx 1.7.4

It has no effect on nginx 1.7.4. Is that version not supported?

The REST URL can modify an upstream, and querying the REST URL shows the modification correctly, but proxy_pass requests are still sent to the previous hosts.

Server query results are inaccurate

If a server uses a domain name (whether it comes from the configuration file or from a dynamically created upstream), /detail or /upstream/host1 shows only one of its IPs. For example:

curl -d "server www.baidu.com;" 127.0.0.1:8080/upstream/www.baidu.com
success

curl 127.0.0.1:8080/upstream/www.baidu.com # nslookup shows www.baidu.com resolves to two IPs, 115.239.211.110 and 115.239.210.27, but this query returns only one
server 115.239.210.27:80

curl -I -H "Host: www.baidu.com" 127.0.0.1:80
A packet capture shows both IPs are actually used in round-robin:
server 115.239.210.27:80
server 115.239.211.110:80

check_index value changes when running in multi-process mode

I was debugging an issue and found that the check_index returned by ngx_http_upstream_check_add_dynamic_peer gets altered when the number of worker processes is greater than one.

I am doing the following:

  1. I had two servers, 10.0.1.13 and 10.0.1.14
  2. Now, .14 has gone down, so I am replacing it with .15. So I sent the command with the upstream configuration as 10.0.1.13 and 10.0.1.15

After adding a few debug statements, I narrowed it down: ngx_http_upstream_check_add_dynamic_peer returns the check_index as 0 and 1 during dynamic upstream server configuration when the number of worker processes is set to 1. But the moment I set it to 2, it returns the check_index as 0 and 2.

I am not sure where this is changing... just wanted to tell you so that you can have a look at it.

Regards, Santos

When a domain is used in the upstream, there are some problems

hi @yzprofile:

Scenario:

With the configuration below, querying /detail or /upstream/bar returns:
bar server 61.135.169.121:80 server 127.0.0.1:80 server 127.0.0.1:81

Problem:

Why does the query show only one IP for the domain www.baidu.com when it actually has two? The dynamic upstream DNS does take effect here, because requests are proxied to bar and the logs show both IPs of that domain.

Related:

By the way, I recently implemented dynamic management of a single upstream's members on top of agentzh's ngx_lua_upstream_module (https://github.com/SinaMSRE/lua-upstream-ngianx-module). Its shortcoming is that changes only take effect in the current worker process; the other workers are not synchronized. (The basic idea is to locate the upstream's peers data structure and add or remove servers in the memory it points to. Initially the workers share the pages inherited from the master because nothing has been written to them, but as soon as one worker modifies that memory, copy-on-write kicks in, so only that worker actually sees the change.) Do you have a good way to solve this? Thanks for the guidance :).
I have thought of two approaches myself:
(1) use inter-process communication to synchronize the workers;
(2) put the data to be updated in shared memory and have the workers pull from it periodically.

Configuration:

server {
    listen   800;
    location / { 
        set $test               "bar";
        proxy_pass           http://$test;
    }
}   

upstream bar {
    server www.baidu.com;
    server 127.0.0.1:80;
    server 127.0.0.1:81;
}

with health check

Hi,

Thank you for developing this module. I have been looking for this for quite some time.

Could you please clarify what works and what does not? Also, I could not see Tengine including this module.

It cannot work with the common nginx_upstream_check_module; if you still want to use that module, you can try this branch of Tengine, which contains a patched upstream check module.

Please clarify.

Thanks, Santos

Performance impact

Thanks for adding support for dynamically manipulating the upstream backend config.
Is there any performance impact when Nginx is built with this module? I see it logging a lot of upstream data every second.

make failed, ngx_dyups_mark_upstream_delete undefined

Hello, any suggestions? Thanks.

objs/addon/ngx_http_dyups_module/ngx_http_dyups_module.o: In function `ngx_dyups_mark_upstream_delete':
/data/compile/nginx/bundle/ngx_http_dyups_module/ngx_http_dyups_module.c:1626: undefined reference to `ngx_http_upstream_check_delete_dynamic_peer'
collect2: ld returned 1 exit status
make[2]: *** [objs/nginx] Error 1
make[2]: Leaving directory `/data/compile/nginx/build/nginx-1.9.7'
make[1]: *** [build] Error 2
make[1]: Leaving directory `/data/compile/nginx/build/nginx-1.9.7'
make: *** [all] Error 2
make failed

Dockerfile error with git apply of patch

The docker build fails on the wget of the 2.0.3 version of the patch.

The patch file is not versioned, but even if you change the Dockerfile to just use the .patch filename as-is, the docker build still fails with the message:

error: patch failed: src/http/ngx_http_upstream.h:1
error: src/http/ngx_http_upstream.h: patch does not apply
error: patch failed: src/http/ngx_http_upstream_check_module.c:98
error: src/http/ngx_http_upstream_check_module.c: patch does not apply

Language Barrier

Hello!

Let me start by saying I really like this module. Whenever I pull a new module into my production stack I often look at project activity as well as open/closed bugs. Unfortunately, I have a hard time evaluating module stability because I am unable to read/understand some of the bugs. I think they are written in Chinese, but I'm not sure.

For example, issue #19. I'm not sure if I should be concerned about this scenario because I can not understand the thread. Is there any way to provide an issue summary in English if the issue is critical? I know it's annoying to repeat yourself, but it would be much appreciated. Thank You.

Losing the upstream configuration during command error

Whenever I send some invalid configuration, it wipes out the existing configuration. It works fine once I correct my configuration command, but it would be nice if the command syntax were verified before the existing configuration is wiped.

How to reproduce:
Send a wrong upstream configuration command and then check the upstream configuration.

test_1:~/dyups$ !2217
curl -d "server 10.0.1.13; server 10.0.1.15;check iserver 10.0.1.14;interval=3000 rise=2 fall=5 timeout=1000 type=http;keepalive 20;" 127.0.0.1:8081/upstream/myupstream
commands error
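
For reference, a hypothetical corrected body for the same servers, using the check syntax that appears elsewhere in these reports (whether check directives are accepted through the dyups interface depends on the build):

    curl -d "server 10.0.1.13; server 10.0.1.15; check interval=3000 rise=2 fall=5 timeout=1000 type=http; keepalive 20;" 127.0.0.1:8081/upstream/myupstream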

I have a question about it

Hi man,
What is the meaning of "restore upstream configuration in init process handler" in your document? Could you explain it in more detail?
Thx!

A question about deleting an upstream

If a delete command is sent to the REST API, the function that finally runs is:
static ngx_int_t
ngx_dyups_do_delete(ngx_str_t *name, ngx_str_t *rv)
{
    ngx_int_t                   dumy;
    ngx_http_dyups_srv_conf_t  *duscf;

    duscf = ngx_dyups_find_upstream(name, &dumy);

    if (duscf == NULL || duscf->deleted) {

        ngx_log_error(NGX_LOG_DEBUG, ngx_cycle->log, 0,
                      "[dyups] not find upstream %V %p", name, duscf);

        ngx_str_set(rv, "not found uptream");
        return NGX_HTTP_NOT_FOUND;
    }

    ngx_dyups_mark_upstream_delete(duscf);

    ngx_str_set(rv, "success");

    return NGX_HTTP_OK;
}

As far as I can see, ngx_dyups_mark_upstream_delete only sets duscf->deleted = NGX_DYUPS_DELETING and does not actually free the memory; the memory can only be cleaned up when ngx_dyups_find_upstream is called again. Is that how it really works?
If so, deleting only marks the servers as down, right? And the memory is only reclaimed when the next REST API call reaches ngx_dyups_find_upstream again?

dyups process start after abnormal exits

RHEL 7.2 + nginx 1.9.11 with the dyups module compiled in. After starting the nginx service, the error log shows:
2016/03/02 12:58:52 [alert] 8412#8412: worker process 10652 exited on signal 11 (core dumped)
2016/03/02 12:58:52 [alert] 8412#8412: shared memory zone "ngx_http_dyups_module#1" was locked by 10652
2016/03/02 12:58:52 [alert] 10654#10654: [dyups] process start after abnormal exits

The same build parameters work fine on RHEL 6 + nginx 1.4.4 with the dyups module.

A question about adding multiple servers from Lua

status, rv = dyups.delete("backend_aether")
if status ~= 200 then
    ngx.print(status, rv)
    return
end
ngx.print("delete success")
local status, rv = dyups.update("backend_aether", [[server 10.8.210.9:8880;server10.8.210.14:8880;]]);
(here status = 200)
...
Note: backend_aether is the name of an upstream.
Is this the right way to add multiple servers? The result does not seem to take effect.
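
For comparison, the REST interface used elsewhere in these reports takes the same upstream body; a hypothetical equivalent call (the interface address is assumed, and note the space after each "server" directive name):

    curl -d "server 10.8.210.9:8880; server 10.8.210.14:8880;" 127.0.0.1:8081/upstream/backend_aether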

Two questions

1. In ngx_http_dyups_module.c, lines 604 and 607 unlock the mutex and then immediately lock it again. Are those two lines redundant?
2. Would the last few lines of ngx_http_dyups_read_msg be more efficient written like this:

if (!ngx_shmtx_trylock(&shpool->mutex)) {
    return;
}

ngx_http_dyups_read_msg_locked(ev);

ngx_shmtx_unlock(&shpool->mutex);

Combining upstream and domain conflicts with proxy_pass

We use domain names as upstream names for late binding, like this:

set $where myapp.com;
proxy_pass http://$where;

Upstream myapp.com is created with dyups and can be absent at config parse time.

Now you can also have proxy_pass http://myapp.com somewhere in your configs. This is fine (or not) until you reload nginx. When you reload, upstream myapp.com disappears from the upstream list, even though it is present in the config files. A restart makes the upstream reappear.

I'm not sure how to approach this issue other than avoid using dynamic upstream names directly.

Compile error or warning

./ngx_http_dyups_module/ngx_http_dyups_module.c:1084:41: error:‘rv.len’ may be used uninitialized in this function [-Werror=maybe-uninitialized]

And another question: how do I turn off the ngx_lua support?

Core dump together with the keepalive module

hi
Another core dump, and I cannot figure out why it happens. Do you have any ideas?

(gdb) bt
#0 0x000000000048f501 in ngx_http_upstream_get_keepalive_peer (data=0xa01faf0, pc=0x9d093d0) at src/http/modules/ngx_http_upstream_keepalive_module.c:240
#1 ngx_http_upstream_get_keepalive_peer (pc=0x9d093d0, data=0xa01faf0) at src/http/modules/ngx_http_upstream_keepalive_module.c:209
#2 0x000000000043e1ec in ngx_event_connect_peer (pc=0x9d093d0) at src/event/ngx_event_connect.c:25
#3 0x000000000046f163 in ngx_http_upstream_connect (r=0xc4c7b00, u=0x9d093c0) at src/http/ngx_http_upstream.c:1178
#4 0x000000000046de30 in ngx_http_upstream_process_header (r=0xc4c7b00, u=0x9d093c0) at src/http/ngx_http_upstream.c:1572
#5 0x000000000046b5ad in ngx_http_upstream_handler (ev=0x7f7e3131d0f8) at src/http/ngx_http_upstream.c:988
#6 0x000000000043cda5 in ngx_event_expire_timers () at src/event/ngx_event_timer.c:149
#7 0x000000000043c99d in ngx_process_events_and_timers (cycle=0x2784790) at src/event/ngx_event.c:265
#8 0x0000000000443c99 in ngx_worker_process_cycle (cycle=0x2784790, data=) at src/os/unix/ngx_process_cycle.c:807
#9 0x0000000000442695 in ngx_spawn_process (cycle=0x2784790, proc=0x443be0 <ngx_worker_process_cycle>, data=0xa, name=0xab97d6 "worker process", respawn=10)

at src/os/unix/ngx_process.c:198

#10 0x0000000000445031 in ngx_reap_children (cycle=0x2784790) at src/os/unix/ngx_process_cycle.c:619
#11 ngx_master_process_cycle (cycle=0x2784790) at src/os/unix/ngx_process_cycle.c:180
#12 0x0000000000424e51 in main (argc=, argv=) at src/core/nginx.c:412

(gdb) f 0
#0 0x000000000048f501 in ngx_http_upstream_get_keepalive_peer (data=0xa01faf0, pc=0x9d093d0) at src/http/modules/ngx_http_upstream_keepalive_module.c:240

240 in src/http/modules/ngx_http_upstream_keepalive_module.c
(gdb) info locals
q = 0x0
c =
item = 0xfffffffffffffff8
cache = 0x28e77d0

(gdb) p *cache
$40 = {prev = 0x0, next = 0x0}

You can see the problem happens when the keepalive module looks up its cache; it seems ngx_http_upstream_init_keepalive was never called to run ngx_queue_init(&kcf->cache).
Inspecting the corresponding upstream structure shows nothing abnormal:

peer = {
init_upstream = 0x48f0e0 < ngx_http_upstream_init_keepalive >, init = 0x4daaa0 < ngx_http_dyups_init_peer >, data = 0x292b210}, srv_conf = 0x28e76e0,
servers = 0x28883f8, flags = 63, host = {len = 14, data = 0x2888308 "www.******.net"}, file_name = 0xaeffc5 "dynamic_upstream", line = 0, port = 80,
default_port = 0, no_port = 0}

In the error log for that period, several earlier delete attempts did not actually delete anything because of the reference count, but at 12:22 there is suddenly a delete upstream followed by an add new upstream:
2014/08/05 12:22:30 [error] 49777#0: [dyups] deleted upstream
2014/08/05 12:22:30 [error] 49777#0: [dyups] add new upstream
2014/08/05 12:23:03 [error] 49777#0: *254101116 upstream timed out (110: Connection timed out) while reading response header from upstream, client:
2014/08/05 12:23:08 [alert] 27108#0: worker process 49777 exited on signal 11 (core dumped)

Looking at r->upstream_states, this core corresponds to the third attempt to connect to the backend; the first two attempts both hit my configured 10-minute read timeout.
The core happened at 12:23, so between 12:03 and then it should not have been possible to delete the upstream and add a new one.
I do not know whether that is related; I cannot figure out where the error comes from.

(gdb) p *r->upstream_states
$7 = {elts = 0xa01fd90, nelts = 3, size = 56, nalloc = 4, pool = 0xc4c7ab0}
(gdb) p *(ngx_http_upstream_state_t *)r->upstream_states.elts
$8 = {bl_time = 0, bl_state = 0, status = 504, response_sec = 600, response_msec = 57, response_length = 0, peer = 0x292b248}
(gdb) p *((ngx_http_upstream_state_t *)r->upstream_states.elts)->peer
$9 = {len = 17, data = 0x28e7898 ""}
(gdb) p (((ngx_http_upstream_state_t *)r->upstream_states.elts)[1])
$11 = {bl_time = 0, bl_state = 0, status = 504, response_sec = 600, response_msec = 58, response_length = 0, peer = 0x292b2b8}
(gdb) p (((ngx_http_upstream_state_t *)r->upstream_states.elts)[2])
$13 = {bl_time = 0, bl_state = 0, status = 0, response_sec = 1407212583, response_msec = 691, response_length = 0, peer = 0x0}

Request handling after a server is removed with dyups

Hello, I have a question. With dyups configured, I take a server offline through the REST API. At that moment a request happens to already have been sent to that server. By the time the server finishes processing and is ready to respond, the upstream no longer contains it. How is that response handled?

A question about ngx_dyups_sandbox_update

When the configuration is updated, I see that this function performs a sandbox update: it adds an upstream named _dyups_upstream_sandbox_ but immediately marks it deleted, and the subsequent call to ngx_dyups_find_upstream then releases its memory pool. So what is the point of the sandbox update?

Dynamic change not working

Hi,
I change the servers through dyups, but requests always go to the ones defined in upstream.conf; they are not updated dynamically as the module states. I have the following:

server {
    listen 9090 default_server;

    location / {
        proxy_pass http://balancer;
    }
}

server {
    listen 9091 default_server;
    location / {
        return 200 "first";
    }
}

server {
    listen 9092 default_server;
    location / {
        return 200 "second";
    }
}

My upstream.conf:

upstream balancer {
    server 127.0.0.1:9091;
    server 127.0.0.1:9092;
}

$ curl http://dyups/upstream/balancer
server 127.0.0.1:9091
server 127.0.0.1:9092

$ curl -d 'server 127.0.0.1:9092;' http://dyups/upstream/balancer
success

$ curl http://dyups/upstream/balancer
server 127.0.0.1:9092

Even so:

$ curl http://localhost:9090
first
$ curl http://localhost:9090
first
$ curl http://localhost:9090
second
$ curl http://localhost:9090
first

So the change is not taken into account.

Completely stops working on openresty 1.9.7.4

After I upgraded openresty to 1.9.7.4, this module stopped working completely.
The web API is still fine and returns the correct message, and if I use lua-upstream-nginx-module to read the upstream info, everything looks fine. But when a request actually comes into my server, nginx still sends it to the old upstream, not the new one.

[urgent] Questions about ngx_http_dyups_module add/del upstream

Hi~
I want to add and delete upstreams dynamically, and to modify the upstream data before deleting and adding, as follows:

server {
            listen 8081;
            location / {
            content_by_lua '
                ngx.log(ngx.ERR, "here!!!************")
                    local dyups = require "ngx.dyups"
                ngx.print("here!!!")
                    local status, rv = dyups.update("test", [[server 127.0.0.1:8088;]]);
                ngx.print(status, rv)
                    if status ~= ngx.HTTP_OK then
                        ngx.print(status, rv)
                        return
                    end
                    ngx.print("update success")

                    status, rv = dyups.delete("test")
                    if status ~= ngx.HTTP_OK then
                        ngx.print(status, rv)
                        return
                    end
                    ngx.print("delete success")
            ';
                    dyups_interface;  # upstream interface
            }
        }

I am using openresty.
Running this reports the following error:

==> error.log <==
2016/05/09 15:26:13 [error] 12279#0: *1 [lua] content_by_lua(nginx.conf:142):2: here!!!************, client: 127.0.0.1, server: , request: "POST /upstream/dyhost HTTP/1.1", host: "127.0.0.1:8081"
2016/05/09 15:26:13 [error] 12279#0: *1 lua entry thread aborted: runtime error: content_by_lua(nginx.conf:142):3: module 'ngx.dyups' not found:
    no field package.preload['ngx.dyups']
    no file '../lib/ngx/dyups.lua'
    no file '/usr/local/openresty/lualib/ngx/dyups.lua'
    no file '/usr/local/openresty/lualib/ngx/dyups/init.lua'
    no file './ngx/dyups.lua'
    no file '/usr/local/openresty/luajit/share/luajit-2.1.0-beta1/ngx/dyups.lua'
    no file '/usr/local/share/lua/5.1/ngx/dyups.lua'
    no file '/usr/local/share/lua/5.1/ngx/dyups/init.lua'
    no file '/usr/local/openresty/luajit/share/lua/5.1/ngx/dyups.lua'
    no file '/usr/local/openresty/luajit/share/lua/5.1/ngx/dyups/init.lua'
    no file '/usr/local/openresty/lualib/ngx/dyups.so'
    no file './ngx/dyups.so'
    no file '/usr/local/lib/lua/5.1/ngx/dyups.so'
    no file '/usr/local/openresty/luajit/lib/lua/5.1/ngx/dyups.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
    no file '/usr/local/openresty/lualib/ngx.so'
    no file './ngx.so'
    no file '/usr/local/lib/lua/5.1/ngx.so'
    no file '/usr/local/openresty/luajit/lib/lua/5.1/ngx.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
coroutine 0:
    [C]: in function 'require'
    content_by_lua(nginx.conf:142):3: in function <content_by_lua(nginx.conf:142):1>, client: 127.0.0.1, server: , request: "POST /upstream/dyhost HTTP/1.1", host: "127.0.0.1:8081"

health check doesn't work after updating upstream

config/nginx.conf:

http {
    # ...
    # add runtime upstream reconfiguration
    include upstream.conf;
    dyups_upstream_conf  upstream.conf;

    server {
        listen 8090;
        location / {
            dyups_interface;
        }
    }

    server {
        listen       80;
        server_name  localhost;
        location / {
            proxy_pass http://cluster;
        }
        # ...
    }
    # ...
}

config/upstream.conf:

upstream cluster {
    server 127.0.0.1:8888;
    server 127.0.0.1:8889;
    check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    check_http_send "GET / HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}

After running the command curl -d "server 127.0.0.1:8088;server 127.0.0.1:8089;" 127.0.0.1:8090/upstream/cluster, the health check stops working: no health-check requests are received by my test backends.
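
Since the POSTed body appears to replace the entire upstream definition and the body above contains only server lines, one possible update that also keeps the health-check directives from upstream.conf (assuming check directives are accepted through the dyups interface in this build) would be:

    curl -d 'server 127.0.0.1:8088; server 127.0.0.1:8089;
             check interval=3000 rise=2 fall=5 timeout=1000 type=http;
             check_http_send "GET / HTTP/1.0\r\n\r\n";
             check_http_expect_alive http_2xx http_3xx;' \
         127.0.0.1:8090/upstream/cluster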

Keepalive stops working when the upstream is updated

When I update the upstream servers, TCP TIME_WAIT connections rise sharply.
In the source code I found that ngx_http_dyups_init_peer is used instead of ngx_http_upstream_init_keepalive_peer, but I do not know much about the C source or how to fix this problem.

can't try your module with lua

Hi,
I made this install with a Dockerfile:

FROM ubuntu:trusty
MAINTAINER Yves-Marie Saout « [email protected] »

# install LuaJIT
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install openssl libssl-dev zlib1g zlib1g-dev git curl tar make build-essential debhelper autoconf automake patch dpkg-dev fakeroot pbuilder vim && \
    cd /usr/local/src && \
    curl -O http://luajit.org/download/LuaJIT-2.0.3.tar.gz && \
    tar xf LuaJIT-2.0.3.tar.gz && \
    cd LuaJIT-2.0.3 && \
    make && \
    make PREFIX=/usr/local/luajit install

# install nginx with lua-nginx-module
    RUN useradd --no-create-home nginx
    RUN export LUAJIT_LIB=/usr/local/luajit/lib && \
    export LUAJIT_INC=/usr/local/luajit/include/luajit-2.0 && \
    cd /usr/local/src && \
    git clone git://github.com/simpl/ngx_devel_kit.git && \
    git clone git://github.com/chaoslawful/lua-nginx-module.git && \
    git clone git://github.com/yzprofile/ngx_http_dyups_module.git && \
    curl -LO http://downloads.sourceforge.net/project/pcre/pcre/8.35/pcre-8.35.tar.bz2 && \
    tar xf pcre-8.35.tar.bz2 && \
    git clone git://github.com/alibaba/tengine.git && \
    cd tengine && \
    wget https://raw.githubusercontent.com/yzprofile/ngx_http_dyups_module/master/upstream_check-tengine-2.0.3.patch && \
    git apply upstream_check-tengine-2.0.3.patch && \
    ./configure \
        --prefix=/etc/nginx \
        --sbin-path=/usr/sbin/nginx \
        --conf-path=/etc/nginx/nginx.conf \
        --error-log-path=/var/log/nginx/error.log \
        --http-log-path=/var/log/nginx/access.log \
        --pid-path=/var/run/nginx.pid \
        --lock-path=/var/run/nginx.lock \
        --group=nginx \
        --with-http_ssl_module \
        --with-http_realip_module \
        --with-http_addition_module \
        --with-http_sub_module \
        --with-http_dav_module \
        --with-http_flv_module \
        --with-http_mp4_module \
        --with-http_gunzip_module \
        --with-http_gzip_static_module \
        --with-http_random_index_module \
        --with-http_secure_link_module \
        --with-http_stub_status_module \
        --with-mail \
        --with-mail_ssl_module \
        --with-file-aio \
        --with-ipv6 \
        --with-pcre=/usr/local/src/pcre-8.35 \
        --add-module=/usr/local/src/ngx_devel_kit \
        --add-module=/usr/local/src/lua-nginx-module \
        --add-module=/usr/local/src/ngx_http_dyups_module \
        --with-ld-opt="-Wl,-rpath,$LUAJIT_LIB" && \
    make && \
    make install

ADD nginx.conf /etc/nginx/nginx.conf

EXPOSE 80
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]

My nginx.conf:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  text/html;
    sendfile        on;
    keepalive_timeout  65;
    lua_package_path "/etc/nginx/lib/?.lua;;";
    server {
        listen       80;
        index  index.html index.htm;
        location / {
            root   html;
            index  index.html index.htm;
        }
        location /lua_test {
            default_type 'text/plain';
            content_by_lua '
                local dyups = require "ngx.dyups"
                local status, rv = dyups.update("test", [[server 127.0.0.1:8088;]]);
                ngx.print(status, rv)
                if status ~= 200 then
                    ngx.print(status, rv)
                    return
                end
                ngx.print("update success")
                status, rv = dyups.delete("test")
                if status ~= 200 then
                    ngx.print(status, rv)
                    return
                end
                ngx.print("delete success")
            ';
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

And I have this error in error.log when I curl /lua_test:

stack traceback:
coroutine 0:
    [C]: in function 'require'
    content_by_lua:2: in function <content_by_lua:1>, client: 82.227.61.98, server: , request: "GET /lua_test HTTP/1.1", host: "myserver.tld"
2014/09/07 14:37:00 [error] 7#0: *6 lua entry thread aborted: runtime error: content_by_lua:2: module 'ngx.dyups' not found:
    no field package.preload['ngx.dyups']
    no file '/etc/nginx/lib/ngx/dyups.lua'
    no file './ngx/dyups.lua'
    no file '/usr/local/share/luajit-2.0.3/ngx/dyups.lua'
    no file '/usr/local/share/lua/5.1/ngx/dyups.lua'
    no file '/usr/local/share/lua/5.1/ngx/dyups/init.lua'
    no file './ngx/dyups.so'
    no file '/usr/local/lib/lua/5.1/ngx/dyups.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
    no file './ngx.so'
    no file '/usr/local/lib/lua/5.1/ngx.so'
    no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
coroutine 0:
    [C]: in function 'require'
    content_by_lua:2: in function <content_by_lua:1>, client: 82.227.61.98, server: , request: "HEAD /lua_test HTTP/1.1", host: "myserver.tld", referrer: "http://myserver.tld/lua_test"

A question about reference counting

hi

I have a question about the count reference counter.

In ngx_http_upstream_next, u->peer.free(&u->peer, u->peer.data, state) is only called if (u->peer.sockaddr) is set.

After ngx_http_dyups_get_peer executes ctx->scf->count++, it calls return ctx->get(pc, ctx->data). If during that call u->peer.sockaddr is not successfully assigned, won't the count become skewed?

For example: request A fails to reach the backend and the second proxy_next_upstream attempt calls get_peer. At that moment request B arrives and all of its backends fail with 503, producing "no live upstreams". Request A then returns NGX_BUSY from ngx_http_upstream_get_round_robin_peer, and ngx_http_upstream_next does not call peer.free. At that point the count value is inconsistent.

So I think count should only be incremented once ctx->get(pc, ctx->data) has successfully obtained peer.sockaddr!

The module has no actual effect!!!

dear:
I added two backend nodes and put a different static file on each to tell them apart; after configuring, both show an OK status. Then I started testing: after kicking one node out, the listing shows only one node, yet my requests still reach both of the different pages. In other words, the module does not take effect. Has this case been tested?
Below is my test procedure and my configuration.

[root@QA-PUB01 test]# curl 127.0.0.1:8888/detail
rundeck
server 192.168.2.200:80
server 192.168.12.186:8080

[root@QA-PUB01 test]# curl 192.168.12.186:8080/status.jsp
Running1111

[root@QA-PUB01 test]# curl 192.168.2.200:80/status.jsp
Running

[root@QA-PUB01 test]# curl -d "server 192.168.2.200:80;" 127.0.0.1:8888/upstream/rundeck
success[root@QA-PUB01 test]# curl 127.0.0.1:8888/detail
rundeck
server 192.168.2.200:80

[root@QA-PUB01 test]# curl 192.168.12.186/status.jsp
Running

<!-- qa-pub01 Wed, 04 May 2016 09:07:08 GMT -->
[root@QA-PUB01 test]# curl 192.168.12.186/status.jsp
Running1111

<!-- qa-pub01 Wed, 04 May 2016 09:07:09 GMT -->
[root@QA-PUB01 test]# curl -d "server 192.168.12.186:8080;" 127.0.0.1:8888/upstream/rundeck
success[root@QA-PUB01 test]# curl 127.0.0.1:8888/detail
rundeck
server 192.168.12.186:8080

[root@QA-PUB01 test]# curl 192.168.12.186/status.jsp
**Running**

<!-- qa-pub01 Wed, 04 May 2016 09:08:02 GMT -->
[root@QA-PUB01 test]# curl 192.168.12.186/status.jsp
**_Running1111_**

<!-- qa-pub01 Wed, 04 May 2016 09:08:03 GMT -->

upstream rundeck {
        #consistent_hash $request_uri;
        server 192.168.2.200:80;
        server 192.168.12.186:8080;
        #session_sticky;

        #check interval=3000 rise=3 fall=2 timeout=2000 type=http; 
        #check_http_send "GET /status.jsp HTTP/1.0\r\n\r\n"; 
        #check_http_expect_alive http_2xx http_3xx; 
}

server {
        listen  8888;
        server_name     127.0.0.1;
        location / {
                 dyups_interface;
        }
}

Hello, I get an error during installation

adding module in ./ngx_http_dyups_module
: command not founddule/config: line 6:
'/ngx_http_dyups_module/config: line 7: syntax error near unexpected token { '/ngx_http_dyups_module/config: line 7:dyups_lua() {
