go-micro
A Go microservices framework
Home Page: https://go-micro.dev
License: Apache License 2.0
I'm playing with the examples at the moment.
If I run the pubsub example with the consul (default) backend, everything works as expected.
~/go-micro/examples/pubsub (git)-[master] % go run main.go
2016/03/22 17:59:21 Broker Listening on [::]:34344
[sub] received message: 0: 2016-03-22 17:59:22.244760259 +0100 CET header map[id:0]
[pub] pubbed message: 0: 2016-03-22 17:59:22.244760259 +0100 CET
[sub] received message: 1: 2016-03-22 17:59:23.244835525 +0100 CET header map[id:1]
[pub] pubbed message: 1: 2016-03-22 17:59:23.244835525 +0100 CET
[sub] received message: 2: 2016-03-22 17:59:24.244759704 +0100 CET header map[id:2]
[pub] pubbed message: 2: 2016-03-22 17:59:24.244759704 +0100 CET
[sub] received message: 3: 2016-03-22 17:59:25.244865275 +0100 CET header map[id:3]
[pub] pubbed message: 3: 2016-03-22 17:59:25.244865275 +0100 CET
[sub] received message: 4: 2016-03-22 17:59:26.24476599 +0100 CET header map[id:4]
But if I import _ "github.com/micro/go-plugins/registry/etcd" and try to run the example,
the process panics with:
~/go-micro/examples/pubsub (git)-[master] % go run main.go --registry etcd --registry_address http://127.0.0.1:4001
panic: http: multiple registrations for /_sub
goroutine 1 [running]:
panic(0x9d9b40, 0xc82014a080)
/usr/lib/go/src/runtime/panic.go:464 +0x3e6
net/http.(*ServeMux).Handle(0xc820012ba0, 0xc548e0, 0x5, 0x7f97413972a0, 0xc8200cc160)
/usr/lib/go/src/net/http/server.go:1926 +0x297
net/http.Handle(0xc548e0, 0x5, 0x7f97413972a0, 0xc8200cc160)
/usr/lib/go/src/net/http/server.go:1961 +0x4b
github.com/micro/go-micro/broker.(*httpBroker).Init(0xc8200cc160, 0x0, 0x0, 0x0, 0x0, 0x0)
/home/rkaufmann/.go/src/github.com/micro/go-micro/broker/http_broker.go:321 +0x30f
github.com/micro/go-micro/broker.Init(0x0, 0x0, 0x0, 0x0, 0x0)
/home/rkaufmann/.go/src/github.com/micro/go-micro/broker/broker.go:49 +0x55
main.main()
/home/rkaufmann/go-micro/examples/pubsub/main.go:49 +0x5b
exit status 2
Am I doing something wrong?
Mock every interface so we can write tests
Unmarshalling the following panics because the result key is missing. I think it should gracefully handle all sorts of responses (also invalid or incomplete ones) from a service.
{"id":0,"error":null}
https://github.com/micro/go-micro/blob/dddcdc3/codec/jsonrpc/client.go#L92
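A minimal sketch of the sort of defensive decoding meant here (clientResponse and readBody are illustrative names, not the actual jsonrpc codec internals): treat a missing or null result as an empty reply instead of dereferencing it.
package main

import (
	"encoding/json"
	"fmt"
)

type clientResponse struct {
	ID     uint64           `json:"id"`
	Result *json.RawMessage `json:"result"`
	Error  interface{}      `json:"error"`
}

func readBody(data []byte, reply interface{}) error {
	var resp clientResponse
	if err := json.Unmarshal(data, &resp); err != nil {
		return err
	}
	if resp.Error != nil {
		return fmt.Errorf("rpc error: %v", resp.Error)
	}
	// Guard against responses like {"id":0,"error":null} that carry no result key.
	if resp.Result == nil {
		return nil
	}
	return json.Unmarshal(*resp.Result, reply)
}

func main() {
	var out map[string]interface{}
	fmt.Println(readBody([]byte(`{"id":0,"error":null}`), &out)) // <nil>, no panic
}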
glog requires flag.Parse() in order to use its flags, but micro doesn't call flag.Parse(), so executing micro-based code prints the error in the log. I think we should either change the logging system or put a flag.Parse() into cmd.Init() to avoid these errors. What do you think?
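To show where the message comes from, a standalone sketch outside go-micro: glog only stops complaining once flag.Parse() has run, which is why calling it from cmd.Init() (or dropping glog entirely) would silence the error.
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	flag.Parse() // without this, glog prints "ERROR: logging before flag.Parse"
	glog.Info("service starting")
	glog.Flush()
}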
When trying to build a service, the build process fails with the following error. This appears to be from the latest commit.
github.com/myodc/go-micro/registry/kubernetes
../../../myodc/go-micro/registry/kubernetes/kubernetes.go:41: not enough arguments in call to c.client.Services(c.namespace).List
../../../myodc/go-micro/registry/kubernetes/kubernetes.go:78: not enough arguments in call to c.client.Services(c.namespace).List
../../../myodc/go-micro/registry/kubernetes/watcher.go:69: not enough arguments in call to svi.List
I used Docker to start a micro service together with the docker-consul container, but I cannot change the Consul registry address the service uses.
FROM ubuntu:latest
RUN mkdir /sms
WORKDIR /sms
ADD . /sms
CMD ["./sms"]
sms:
build: .
environment:
- MICRO_REGISTRY_ADDRESS=http://consul:8501
volumes:
- ./logs/:/sms/logs/
- ./etc/env_strings.conf:/etc/env_strings.conf
links:
- consul
consul:
image: progrium/consul
Attaching to sms_consul_1, sms_sms_1
consul_1 | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
consul_1 | ==> Starting Consul agent...
consul_1 | ==> Starting Consul agent RPC...
consul_1 | ==> Consul agent running!
consul_1 | Node name: '70f340bc6eec'
consul_1 | Datacenter: 'dc1'
consul_1 | Server: false (bootstrap: false)
consul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
consul_1 | Cluster Addr: 192.168.42.2 (LAN: 8301, WAN: 8302)
consul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
consul_1 | Atlas: <disabled>
consul_1 |
consul_1 | ==> Log data will now stream in as it occurs:
consul_1 |
consul_1 | 2016/06/23 12:50:45 [INFO] serf: EventMemberJoin: 70f340bc6eec 192.168.42.2
consul_1 | 2016/06/23 12:50:45 [WARN] serf: Failed to re-join any previously known node
consul_1 | 2016/06/23 12:50:45 [ERR] agent: failed to sync remote state: No known Consul servers
sms_1 | 2016/06/23 12:50:45 Listening on [::]:50573
sms_1 | 2016/06/23 12:50:45 Broker Listening on [::]:34602
sms_1 | 2016/06/23 12:50:45 Registering node: service.sms-F05AE48C
sms_1 | 2016/06/23 12:50:45 consul.watch: Watch (type: services) errored: Get http://127.0.0.1:8500/v1/catalog/services: dial tcp 127.0.0.1:8500: getsockopt: connection refused, retry in 5s
sms_1 | time="2016-06-23T12:50:45Z" level=error msg="Put http://127.0.0.1:8500/v1/agent/service/register: dial tcp 127.0.0.1:8500: getsockopt: connection refused"
sms_sms_1 exited with code 0
How should I solve this problem? thx :)
What do you think about caching service nodes on the client side and watching them when possible?
It is possible when using consul or etcd; I don't know about kubernetes.
Or maybe there is already a mechanism for that in the Consul client, for example, and I am just not aware of it?
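A rough sketch of the client-side cache idea, assuming the registry's GetService/Watch interface; this is not an existing go-micro package, just what such a cache could look like: lookups are served from a local map and entries are invalidated whenever the watcher reports a change for that service.
package cache

import (
	"sync"

	"github.com/micro/go-micro/registry"
)

type Cache struct {
	r     registry.Registry
	mu    sync.RWMutex
	cache map[string][]*registry.Service
}

func New(r registry.Registry) *Cache {
	c := &Cache{r: r, cache: make(map[string][]*registry.Service)}
	go c.watch()
	return c
}

// GetService serves from the cache, falling back to the registry on a miss.
func (c *Cache) GetService(name string) ([]*registry.Service, error) {
	c.mu.RLock()
	if s, ok := c.cache[name]; ok {
		c.mu.RUnlock()
		return s, nil
	}
	c.mu.RUnlock()

	s, err := c.r.GetService(name)
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	c.cache[name] = s
	c.mu.Unlock()
	return s, nil
}

// watch invalidates cached entries whenever the registry reports a change.
func (c *Cache) watch() {
	w, err := c.r.Watch()
	if err != nil {
		return
	}
	for {
		res, err := w.Next()
		if err != nil {
			return
		}
		if res.Service == nil {
			continue
		}
		c.mu.Lock()
		delete(c.cache, res.Service.Name)
		c.mu.Unlock()
	}
}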
What do you think about moving the registry flag to the registry package?
I think that's how it should be done, because it matters not only on the server side but also on the client side.
The registry package has proved to be very useful; I'm using it not only to find RPC services but also databases etc., even when not using the server
package. I think it would be more convenient to have this flag in the registry package and then initialise DefaultRegistry
using registry.Init().
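A hypothetical sketch of what that entry point could look like; registry.Init as written here does not exist yet, this is only the shape of the proposal.
// inside the registry package (hypothetical):
// Init (re-)initialises DefaultRegistry so that clients, servers and anything else
// using the registry directly share a single configuration point.
func Init(addrs ...string) {
	DefaultRegistry = NewRegistry(addrs)
}

// caller side (hypothetical): registry.Init("127.0.0.1:8500")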
Use the context inside client/server to signal when a request via client.Call or client.Stream is to be cancelled.
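For instance, a caller-side fragment of what honouring the context would enable; it assumes the go-micro client package, golang.org/x/net/context and the examples' generated example package.
func callWithTimeout() error {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	req := client.NewRequest("go.micro.srv.example", "Example.Call", &example.Request{Name: "John"})
	rsp := &example.Response{}

	// with context support this should return as soon as ctx is cancelled or times out,
	// rather than blocking on the transport until it fails on its own
	return client.Call(ctx, req, rsp)
}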
If we use transport.Secure(true) and do not set a tls.Config, the transport will automatically generate a self-signed cert, but that cert is not trusted. We could use InsecureSkipVerify to ignore the issue when making a client request, but some tools cannot skip it, such as ab, and if we want to support HTTP/2, the tools nghttp and h2load do not accept an insecure cert either.
I use these tools for load testing; they request a service node directly, as follows:
go run main.go
2016/06/06 11:12:06 Listening on [::]:54897
2016/06/06 11:12:06 Broker Listening on [::]:54898
postdata
{"method":"Say.Hello","params":[{"name":"world"}],"id":0}
ab -c 100 -n 1000000 -H 'Content-Type: application/json' -p postdata https://127.0.0.1:54897/
I generated a private CA cert and made the system trust it, so when we use transport.Secure(true) and set transport.CACert(certFilepathFromENV, keyFilepathFromENV), the transport generates a cert issued by this CA and the address https://127.0.0.1:54897/ will be trusted. If we do not supply a CA cert, it generates a self-signed cert as before.
While doing this work I wrote a small TLS helper library, github.com/gogap/misc/lib/tls.
It works as follows:
import gogapTLS "github.com/gogap/misc/lib/tls"

// snippet from the modified http transport's listen path:
if p.opts.Secure {
	if tlsConfig == nil {
		// work out which hosts the generated cert should cover
		var hosts []string
		if h, _, e := net.SplitHostPort(addr); e == nil {
			if h == "" {
				hosts = getIPAddrList()
			} else {
				hosts = append(hosts, addr)
			}
		}
		// generate a cert issued by the configured CA (self-signed if no CA is given)
		caConf := getCAConfig(p.opts.Context)
		var err error
		if cert, key, err = gogapTLS.GenerateCertificate(
			gogapTLS.Host(hosts...),
			gogapTLS.CAFile(caConf.cert, caConf.key),
		); err != nil {
			return nil, err
		}
		var cer tls.Certificate
		if cer, err = tls.X509KeyPair(cert, key); err != nil {
			return nil, err
		}
		tlsConfig = &tls.Config{Certificates: []tls.Certificate{cer}}
	}
	fn := func(addr string) (net.Listener, error) {
		return tls.Listen("tcp", addr, tlsConfig)
	}
	l, err = listen(addr, fn)
} else {
	fn := func(addr string) (net.Listener, error) {
		return net.Listen("tcp", addr)
	}
	l, err = listen(addr, fn)
}
If I run a service and stop it:
130 rkaufmann@rkaufmann ~/micro/go/src/github.com/testService % GOPATH=/home/rkaufmann/micro/go go run main.go --registry=etcdv3 :(
2016/05/26 12:17:58 Listening on [::]:35325
2016/05/26 12:17:58 Broker Listening on [::]:44141
2016/05/26 12:17:58 Registering node: go.micro.srv.testService-1e97b357-232b-11e6-8b9d-c0cb380679de
^C2016/05/26 12:18:05 Deregistering node: go.micro.srv.testService-1e97b357-232b-11e6-8b9d-c0cb380679de
2016/05/26 12:18:06 Unsubscribing from topic: topic.go.micro.srv.testService
2016/05/26 12:18:06 Unsubscribing from topic: topic.go.micro.srv.testService
I see messages like
2016/05/26 12:15:39 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport: write tcp 127.0.0.1:2379->127.0.0.1:52332: use of closed network connection"
2016/05/26 12:16:12 grpc: Server.processUnaryRPC failed to write status: stream error: code = 1 desc = "context canceled"
in etcd.
Because of this, etcd gets filled with dead nodes over time.
/micro-registry/topic:topic.go.micro.srv.testService/broker-4a52d6aa-232a-11e6-a6fc-c0cb380679de.4a5873ba-232a-11e6-a6fc-c0cb380679de
{"name":"topic:topic.go.micro.srv.testService","version":"ff.http.broadcast","metadata":null,"endpoints":null,"nodes":[{"id":"broker-4a52d6aa-232a-11e6-a6fc-c0cb380679de.4a5873ba-232a-11e6-a6fc-c0cb380679de","address":"192.168.0.203","port":42250,"metadata":{"secure":"false"}}]}
After increasing the delay to 100 milliseconds, everything works as expected.
This commit breaks the consul registration and makes services panic right after startup.
Details:
We are using the latest version of Consul running as a 3-node cluster on K8s. The Consul cluster is exposed as a K8S_SERVICE to our micro services. The error we get is:
Unexpected response code: 500 (CheckID does not have associated TTL)
Currently the logic for the consul registry creates a TTL health check and re-registers the service every N seconds.
The drawback of the current approach is that this behaviour constantly changes the service index on the consul side interfering with long-polling. As the number of service instances and services grows, this renders long-polling for changes useless and in turn prevents us from doing a proper client side caching.
We suggest a slight improvement in the registration logic which leverages the Consul idempotent health check API:
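For illustration, the suggested flow sketched directly against the Consul API client: register the service with its TTL check once, then only touch the idempotent check endpoint on each interval, so the service's catalog index is not churned by constant re-registration. The service name and interval here are placeholders.
package main

import (
	"log"
	"time"

	consul "github.com/hashicorp/consul/api"
)

func main() {
	client, err := consul.NewClient(consul.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	reg := &consul.AgentServiceRegistration{
		ID:   "go.micro.srv.example-1",
		Name: "go.micro.srv.example",
		Port: 8080,
		Check: &consul.AgentServiceCheck{
			TTL: "30s",
		},
	}
	// register once, including the TTL check
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}

	// keep the TTL check passing without re-registering the whole service
	for range time.Tick(10 * time.Second) {
		if err := client.Agent().PassTTL("service:"+reg.ID, "ok"); err != nil {
			log.Println("ttl update failed, re-registering:", err)
			client.Agent().ServiceRegister(reg)
		}
	}
}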
When we write business-related handlers and return an error, micro's default selector strategy will retry and the node will be put onto the blacklist. So how can we return business errors such as the following?
return errors.New("user name cannot be empty")
If we put business errors into the context or into the Response proto, is that a good idea?
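One sketch of how a handler could surface an application error without an opaque errors.New, using the typed helpers in github.com/micro/go-micro/errors; Request and Response here stand in for the generated proto types, and whether the selector should skip blacklisting such errors is exactly the open question.
package greeter

import (
	"github.com/micro/go-micro/errors"

	"golang.org/x/net/context"
)

// Request and Response stand in for the service's generated proto types.
type Request struct{ Name string }
type Response struct{ Msg string }

type Say struct{}

func (s *Say) Hello(ctx context.Context, req *Request, rsp *Response) error {
	if len(req.Name) == 0 {
		// a typed application error (code 400) rather than an opaque error string
		return errors.BadRequest("go.micro.srv.greeter", "user name cannot be empty")
	}
	rsp.Msg = "hello " + req.Name
	return nil
}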
We have a Debug endpoint on every service which exposes the method Debug.Health. This will in future include Debug.Vars for env vars and Debug.Stats or something of that nature. We should also support other internal endpoints, such as a Debug.Service or Service.Registered, which indicates what the internal representation of the service registered with the registry is. Service nodes are in some ways standalone; at some point we might actually want to scrape nodes directly for internal information or stats. Which highlights the point that all platform packages should be internal endpoints: Monitor.HealthChecks, Trace.Spans, Config.Loaded, Auth.Config, Metrics.Gauges, etc.
Additionally, internal handlers should be registered in a different server so that there's no overlap with externally visible handlers. We should also expect an explicit header in a call to state that the request is to an internal handler. Optionally, auth policies could be used to prevent access. It should also not be possible to reach internal handlers from the API or Web UI.
We might not want to expose certain endpoints to the registry. They might be System, Debug or Gossip endpoints for "internal" use. Add opts when doing NewHandler or Handle on the server. With the option "Internal".
The endpoint extraction code https://github.com/micro/go-micro/blob/master/server/rpc_handler.go#L15L34
https://github.com/micro/go-micro/blob/master/server/rpc_server.go#L118
https://github.com/micro/go-micro/blob/master/server/rpc_server.go#L121
h := server.NewHandler(new(Gossip))
server.Handle(h, server.Internal(true))
or
h := server.NewHandler(new(Gossip), server.Internal(true))
server.Handle(h)
Running the example throws a type error, as Google has changed the directory structure and import paths of the Kubernetes client. I have a fork which runs the example server: https://github.com/kynrai/go-micro/commit/520aeded8dc50b3d827ad1ca23556bd2c3e05c9c
It would be nice to use smarter defaults - for example 8500 when using consul as the registry.
The blacklist is causing people some pain. Its blanket blacklisting across errors is not good enough. Blacklisting needs to be context specific. This makes life a bit difficult. It's best to turn this off by default but also provide an easier way to add features like this.
Proposal. Create something like a rule (maybe think of a better name) that basically allows the core of the selector to be chained.
The interface
type Rule interface {
	Filter(services []*registry.Service) []*registry.Service
	Mark(service string, node *registry.Node, err error)
	Reset(service string)
}
The internals
Select ...
for _, r := range s.opts.Rules {
	services = r.Filter(services)
}
Mark ...
for _, r := range s.opts.Rules {
	r.Mark(service, node, err)
}
Reset ...
for _, r := range s.opts.Rules {
	r.Reset(service)
}
Then provide a way to add this as an option
func WithRule(r ...Rule) Option {
	return func(o *Options) {
		o.Rules = append(o.Rules, r...)
	}
}
Usage
selector.NewSelector(selector.WithRule(blacklist))
or
selector.Init(selector.WithRule(blacklist))
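For illustration, a naive blacklist written against the proposed Rule interface (assuming the usual sync and registry imports); it filters out nodes that have errored three or more times, and Reset simply clears all counters.
type blacklist struct {
	sync.Mutex
	count map[string]int // node id -> consecutive errors
}

func newBlacklist() *blacklist {
	return &blacklist{count: make(map[string]int)}
}

func (b *blacklist) Filter(services []*registry.Service) []*registry.Service {
	b.Lock()
	defer b.Unlock()
	var out []*registry.Service
	for _, s := range services {
		srv := new(registry.Service)
		*srv = *s
		srv.Nodes = nil
		for _, n := range s.Nodes {
			if b.count[n.Id] < 3 {
				srv.Nodes = append(srv.Nodes, n)
			}
		}
		if len(srv.Nodes) > 0 {
			out = append(out, srv)
		}
	}
	return out
}

func (b *blacklist) Mark(service string, node *registry.Node, err error) {
	b.Lock()
	defer b.Unlock()
	if err == nil {
		delete(b.count, node.Id)
		return
	}
	b.count[node.Id]++
}

func (b *blacklist) Reset(service string) {
	b.Lock()
	defer b.Unlock()
	b.count = make(map[string]int)
}

// usage per the proposal: selector.Init(selector.WithRule(newBlacklist()))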
What should we do if some variables or structs need to be initialised before any calls are made?
They shouldn't be initialised in the handler function, for performance reasons.
I'm not finding a way to do it except for using Go's init function.
Any other ideas?
package handler

import (
	"log"

	"golang.org/x/net/context"
)

type Handler struct{}

var test = "not set"

func init() {
	test = "initialized"
}

// Request and Response are assumed to be this package's (e.g. generated) types.
func (h *Handler) Work(ctx context.Context, req *Request, rsp *Response) error {
	log.Printf("Received Handler.Work request: %v", req)
	// do something
	log.Printf("test=%s", test)
	return nil
}
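One alternative, sketched below under the assumption that the handler is registered from main: move the one-off initialisation into a constructor and keep the state on the handler struct, so nothing happens in the request path and no package-level init() is needed.
package handler

import (
	"log"

	"golang.org/x/net/context"
)

type Handler struct {
	test string // initialised once, reused by every request
}

// New performs the expensive one-off setup up front.
func New() *Handler {
	return &Handler{test: "initialized"}
}

func (h *Handler) Work(ctx context.Context, req *Request, rsp *Response) error {
	log.Printf("test=%s", h.test)
	return nil
}

// in main: server.Handle(server.NewHandler(handler.New()))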
A client call seems to panic when the transport mock is used.
if err := d.client.Call(context.Background(), client.NewRequest("dispatcher", "Notification.Create", request), response); err != nil {
return "", err
}
The stack trace is
panic: close of closed channel
goroutine 21 [running]:
panic(0x9f8260, 0xc820146370)
/opt/go/src/runtime/panic.go:464 +0x3e6
github.com/micro/go-micro/transport/mock.(*mockSocket).Close(0xc82010e7c0, 0x0, 0x0)
/home/philipp/projects/go/src/github.com/micro/go-micro/transport/mock/mock.go:66 +0x31
github.com/micro/go-micro/client.(*rpcPlusCodec).Close(0xc820148030, 0x0, 0x0)
/home/philipp/projects/go/src/github.com/micro/go-micro/client/rpc_codec.go:146 +0x82
github.com/micro/go-micro/client.(*rpcStream).Close(0xc820144150, 0x0, 0x0)
/home/philipp/projects/go/src/github.com/micro/go-micro/client/rpc_stream.go:117 +0x9e
github.com/micro/go-micro/client.(*rpcClient).call(0xc82001a1c0, 0x7fa453c2e830, 0xc8201148a0, 0xc8201128d0, 0x8, 0x7fa453c2e7b8, 0xc820061560, 0xb98dc0, 0xc820114840, 0x0, ...)
/home/philipp/projects/go/src/github.com/micro/go-micro/client/rpc_client.go:110 +0xd88
github.com/micro/go-micro/client.(*rpcClient).Call.func1(0x0, 0x0, 0x0)
/home/philipp/projects/go/src/github.com/micro/go-micro/client/rpc_client.go:255 +0xa82
github.com/micro/go-micro/client.(*rpcClient).Call.func2(0xc820061680, 0xc82007d770, 0xc8201128b0)
/home/philipp/projects/go/src/github.com/micro/go-micro/client/rpc_client.go:265 +0x29
created by github.com/micro/go-micro/client.(*rpcClient).Call
/home/philipp/projects/go/src/github.com/micro/go-micro/client/rpc_client.go:266 +0xbd3
In rare cases the call is successful without a panic. There seems to be some kind of race condition. Any advice is very welcome.
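A possible guard, sketched only (field names are illustrative, not the actual mock transport code): make Close idempotent with sync.Once so racing callers cannot close the underlying channel twice.
package mock

import "sync"

type mockSocket struct {
	exit chan bool
	once sync.Once
	// ... other fields elided
}

// Close is safe to call from multiple goroutines; only the first call closes the channel.
func (m *mockSocket) Close() error {
	m.once.Do(func() {
		close(m.exit)
	})
	return nil
}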
The selector needs to be reworked. Strategies like random and roundrobin should not be individual selectors but instead strategies that can be used to select nodes. There's also the desire to combine blacklisting, weighted priorities and caching. Like the client and server we can implement a single selector that accepts different ways of augmenting the behaviour of a default selector.
Comments welcome.
Add a label selector which prioritises the order in which nodes are returned based on a list of labels. Initialisation should be fairly straightforward: an ordered list of key-value pairs is provided, examined against node metadata, and the nodes are ordered accordingly.
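For illustration, a rough sketch of the prioritisation (the names are made up, not an existing selector API): given an ordered list of key-value labels, each service's nodes are sorted so those matching the earliest label come first and unmatched nodes go last.
package priority

import (
	"sort"

	"github.com/micro/go-micro/registry"
)

type label struct {
	key, value string
}

// prioritise reorders each service's nodes according to the ordered label list.
func prioritise(labels []label, services []*registry.Service) {
	rank := func(n *registry.Node) int {
		for i, l := range labels {
			if n.Metadata[l.key] == l.value {
				return i
			}
		}
		return len(labels) // unmatched nodes sort last
	}
	for _, s := range services {
		nodes := s.Nodes
		sort.SliceStable(nodes, func(i, j int) bool {
			return rank(nodes[i]) < rank(nodes[j])
		})
	}
}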
This is only a warning, but it's an issue people continually bring up. Fix it so it no longer occurs, most likely by encoding tag values using gzip and hex.
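A sketch of that encoding (gzip then hex), with no assumption about where in the consul registry it would be wired in:
package main

import (
	"bytes"
	"compress/gzip"
	"encoding/hex"
	"fmt"
	"io/ioutil"
)

// encode compresses a tag value and hex-encodes it so Consul only sees ASCII-safe tags.
func encode(v []byte) (string, error) {
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	if _, err := w.Write(v); err != nil {
		return "", err
	}
	if err := w.Close(); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf.Bytes()), nil
}

// decode reverses encode.
func decode(s string) ([]byte, error) {
	b, err := hex.DecodeString(s)
	if err != nil {
		return nil, err
	}
	r, err := gzip.NewReader(bytes.NewReader(b))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return ioutil.ReadAll(r)
}

func main() {
	tag, _ := encode([]byte(`{"Name":"Debug.Health"}`))
	fmt.Println(tag) // hex of the gzipped endpoint definition
	out, _ := decode(tag)
	fmt.Println(string(out))
}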
Logging within the library is only for critical issues. We don't need glog for this. It causes more pain than necessary. Replace with standard log package. Take note of cmd flags set to make glog work in the first place. We don't want to break existing apps.
A basic stats collector was added to the main binaries. One should be added here in the endpoint Debug.Stats. This can then be called via the CLI or viewed in the dashboard for VERY basic metrics much like Debug.Health.
Hi
I have a client running on 'blackbeard' machine and a server running on 'wopr' machine.
The consul server is running on blackbeard and a consul agent is running on wopr.
This is the consul server
⟩ env GOMAXPROCS=4 consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul
==> WARNING: BootstrapExpect Mode is specified as 1; this is the same as Bootstrap mode.
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting raft data migration...
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Consul agent running!
Node name: 'blackbeard'
Datacenter: 'dc1'
Server: true (bootstrap: true)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: 192.168.23.23 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>
==> Log data will now stream in as it occurs:
2015/11/04 10:50:08 [INFO] raft: Node at 192.168.23.23:8300 [Follower] entering Follower state
2015/11/04 10:50:08 [INFO] serf: EventMemberJoin: blackbeard 192.168.23.23
2015/11/04 10:50:08 [INFO] consul: adding server blackbeard (Addr: 192.168.23.23:8300) (DC: dc1)
2015/11/04 10:50:08 [INFO] serf: EventMemberJoin: blackbeard.dc1 192.168.23.23
2015/11/04 10:50:08 [INFO] consul: adding server blackbeard.dc1 (Addr: 192.168.23.23:8300) (DC: dc1)
2015/11/04 10:50:08 [ERR] agent: failed to sync remote state: No cluster leader
2015/11/04 10:50:09 [WARN] raft: Heartbeat timeout reached, starting election
2015/11/04 10:50:09 [INFO] raft: Node at 192.168.23.23:8300 [Candidate] entering Candidate state
2015/11/04 10:50:09 [INFO] raft: Election won. Tally: 1
2015/11/04 10:50:09 [INFO] raft: Node at 192.168.23.23:8300 [Leader] entering Leader state
2015/11/04 10:50:09 [INFO] consul: cluster leadership acquired
2015/11/04 10:50:09 [INFO] consul: New leader elected: blackbeard
2015/11/04 10:50:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2015/11/04 10:54:47 [INFO] agent: Deregistered service 'io.jbrodriguez.mediagui.scanner-47b07d7d-827e-11e5-81dc-d05099660776'
2015/11/04 10:56:59 [INFO] agent: Deregistered service 'io.jbrodriguez.mediagui.scanner-47b07d7d-827e-11e5-81dc-d05099660776'
2015/11/04 11:02:29 [INFO] serf: EventMemberJoin: wopr 192.168.23.72
2015/11/04 11:02:29 [INFO] consul: member 'wopr' joined, marking health alive
2015/11/04 11:12:30 [ERR] yamux: Failed to write header: use of closed network connection
This is the consul agent (client)
root@wopr:~# /boot/consul agent -data-dir /tmp/consul -join blackbeard.apertoire.org
==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
==> Starting Consul agent...
==> Starting Consul agent RPC...
==> Joining cluster...
Join completed. Synced with 1 initial agents
==> Consul agent running!
Node name: 'wopr'
Datacenter: 'dc1'
Server: false (bootstrap: false)
Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: 192.168.23.72 (LAN: 8301, WAN: 8302)
Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>
==> Log data will now stream in as it occurs:
2015/11/04 11:02:29 [INFO] serf: EventMemberJoin: wopr 192.168.23.72
2015/11/04 11:02:29 [INFO] agent: (LAN) joining: [blackbeard.apertoire.org]
2015/11/04 11:02:29 [INFO] serf: EventMemberJoin: blackbeard 192.168.23.23
2015/11/04 11:02:29 [INFO] agent: (LAN) joined: 1 Err: <nil>
2015/11/04 11:02:29 [ERR] agent: failed to sync remote state: No known Consul servers
2015/11/04 11:02:29 [INFO] consul: adding server blackbeard (Addr: 192.168.23.23:8300) (DC: dc1)
This is the server
root@wopr:~# /boot/scanner -logtostderr
I1104 11:05:08.097673 25766 scanner.go:18] after parse
I1104 11:05:08.097767 25766 server.go:96] Starting server io.jbrodriguez.mediagui.scanner id io.jbrodriguez.mediagui.scanner-d1b784a8-830d-11e5-a519-d05099660776
I1104 11:05:08.097850 25766 rpc_server.go:222] Listening on 0.0.0.0:58719
I1104 11:05:08.097909 25766 http_broker.go:80] Broker Listening on 0.0.0.0:57574
I1104 11:05:08.097968 25766 rpc_server.go:152] Registering node: io.jbrodriguez.mediagui.scanner-d1b784a8-830d-11e5-a519-d05099660776
I1104 11:10:09.004940 25766 handler.go:19] Received scanner.Scan request: %vfolders:"/mnt/user/films"
I1104 11:10:18.540523 25766 handler.go:19] Received scanner.Scan request: %vfolders:"/mnt/user/films"
I1104 11:10:31.820802 25766 handler.go:19] Received scanner.Scan request: %vfolders:"/mnt/user/films"
This is the output of http://localhost:8500/v1/catalog/service/io.jbrodriguez.mediagui.scanner, run on blackbeard
[
{
"Node": "io.jbrodriguez.mediagui.scanner-d1b784a8-830d-11e5-a519-d05099660776",
"Address": "0.0.0.0",
"ServiceID": "io.jbrodriguez.mediagui.scanner-d1b784a8-830d-11e5-a519-d05099660776",
"ServiceName": "io.jbrodriguez.mediagui.scanner",
"ServiceTags": [
"e={\"Name\":\"Scanner.Scan\",\"Request\":{\"Name\":\"Request\",\"Type\":\"Request\",\"Values\":[{\"Name\":\"Folders\",\"Type\":\"\",\"Values\":null}]},\"Response\":{\"Name\":\"Response\",\"Type\":\"Response\",\"Values\":[{\"Name\":\"Filenames\",\"Type\":\"\",\"Values\":null}]},\"Metadata\":{\"stream\":\"false\"}}",
"e={\"Name\":\"Debug.Health\",\"Request\":{\"Name\":\"Request\",\"Type\":\"Request\",\"Values\":null},\"Response\":{\"Name\":\"Response\",\"Type\":\"Response\",\"Values\":[{\"Name\":\"Status\",\"Type\":\"string\",\"Values\":null}]},\"Metadata\":{\"stream\":\"false\"}}"
],
"ServiceAddress": "",
"ServicePort": 58719
}
]
But the client fails to connect to the server
⟩ ./test -logtostderr
I1104 11:10:28.359626 25332 main.go:25] req=&{service:io.jbrodriguez.mediagui.scanner method:Scanner.Scan contentType:application/octet-stream request:0xc82000b160}
W1104 11:10:28.360803 25332 main.go:31] Unable to connect to scanning service: %s{"id":"go.micro.client","code":500,"detail":"Error sending request: dial tcp 0.0.0.0:58719: getsockopt: connection refused","status":"Internal Server Error"}
If I run the client on wopr, it works
Hi @asim
After the service starts, the consul.watch connection is refused. I don't know how to solve it :(
docker-compose.yml
sms:
build: .
environment:
- MICRO_REGISTRY_ADDRESS=consul:8500
volumes:
- ./logs/:/sms/logs/
- ./etc/env_strings.conf:/etc/env_strings.conf
links:
- consul
consul:
command: -server -bootstrap -rejoin
image: progrium/consul:latest
server log:
Creating sms_consul_1
Creating sms_sms_1
Attaching to sms_consul_1, sms_sms_1
consul_1 | ==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
consul_1 | ==> WARNING: It is highly recommended to set GOMAXPROCS higher than 1
sms_1 | 2016/06/24 02:15:20 Listening on [::]:34066
consul_1 | ==> Starting raft data migration...
sms_1 | 2016/06/24 02:15:20 Broker Listening on [::]:46361
consul_1 | ==> Starting Consul agent...
sms_1 | 2016/06/24 02:15:20 Registering node: sms-7E67AC78
consul_1 | ==> Starting Consul agent RPC...
consul_1 | ==> Consul agent running!
consul_1 | Node name: '6dd7183da9a7'
consul_1 | Datacenter: 'dc1'
consul_1 | Server: true (bootstrap: true)
consul_1 | Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 53, RPC: 8400)
consul_1 | Cluster Addr: 192.168.42.2 (LAN: 8301, WAN: 8302)
consul_1 | Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
consul_1 | Atlas: <disabled>
consul_1 |
consul_1 | ==> Log data will now stream in as it occurs:
consul_1 |
consul_1 | 2016/06/24 02:15:20 [INFO] serf: EventMemberJoin: 6dd7183da9a7 192.168.42.2
consul_1 | 2016/06/24 02:15:20 [INFO] serf: EventMemberJoin: 6dd7183da9a7.dc1 192.168.42.2
consul_1 | 2016/06/24 02:15:20 [INFO] raft: Node at 192.168.42.2:8300 [Follower] entering Follower state
consul_1 | 2016/06/24 02:15:20 [INFO] consul: adding server 6dd7183da9a7 (Addr: 192.168.42.2:8300) (DC: dc1)
consul_1 | 2016/06/24 02:15:20 [INFO] consul: adding server 6dd7183da9a7.dc1 (Addr: 192.168.42.2:8300) (DC: dc1)
consul_1 | 2016/06/24 02:15:20 [ERR] agent: failed to sync remote state: No cluster leader
sms_1 | 2016/06/24 02:15:20 consul.watch: Watch (type: services) errored: Get http://127.0.0.1:8500/v1/catalog/services: dial tcp 127.0.0.1:8500: getsockopt: connection refused, retry in 5s
The title says it all
Trying to run the examples/server
(this one); the streaming handler's metadata is empty and the call itself does not quite work.
Support exponential backoff when making client Calls with retries. Make this configurable.
There are two options: add it to client.Options or to client.CallOptions.
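For example, a backoff function of the retry attempt that the client could consult between retries; the BackoffFunc type and where it lives (Options vs CallOptions) are exactly what is up for discussion, so treat the names below as hypothetical.
package backoff

import (
	"math"
	"time"
)

// BackoffFunc maps a retry attempt number to a wait duration (hypothetical type).
type BackoffFunc func(attempt int) time.Duration

// Exponential waits 0, 200ms, 400ms, 800ms, ... between successive attempts.
func Exponential(attempt int) time.Duration {
	if attempt == 0 {
		return 0
	}
	return time.Duration(math.Pow(2, float64(attempt))) * 100 * time.Millisecond
}

// hypothetical usage as a CallOption:
// client.Call(ctx, req, rsp, client.WithBackoff(backoff.Exponential))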
This looks really nice.
Wondering about a few things in terms of roadmap:
LB: In terms of a load balancer, are you thinking about Fabio, I wonder? It hooks straight into Consul too.
https://github.com/eBay/fabio
Client micro-services:
HTTP/2 means we no longer have to push everything into a few HTTP payloads.
So it makes it possible to do web components as micro-services :)
Also, I do version hashing this way, just to give you an idea:
module.exports = {
baseUrl : 'https://mysite.com/static/',
resources : {
vendor : {
version : 'vendor-d41d8cd98f.js',
pushWith : [ 'scout' ]
},
application : {
version : 'application-a32e3ec23d.js',
pushWith : [ 'scout' ]
},
secondary : {
version : 'secondary-e43b8ad12f.js',
pushWith : [ ]
}
}
};
When I exited the server, it did show the log info
Deregistering node: .....
But when I checked http://127.0.0.1:8500/v1/catalog/nodes, the node was still there. After starting and restarting the node a few times, http://127.0.0.1:8500/v1/catalog/nodes listed a lot of nodes, like the screenshot below, and then the client couldn't tell which server was actually running and received a 500 error code.
The catalog:
http://i.imgur.com/Nd6YhtU.png
The result:
http://i.imgur.com/NOCYo7N.png
Please check and fix it
For a test-setup I tried to run multiple Server
instances in one application.
one := server.NewServer(server.Address("localhost:1600"), server.Registry(r), ...)
one.Start()
one.Register()
two := server.NewServer(server.Address("localhost:1601"), server.Registry(r), ...)
two.Start()
two.Register()
The second Start fails with the error "already listening on localhost:1600". Is there some global shared state? I wonder why the address is needed at all, since everything (Registry, Transport and Broker) runs over NATS in my case. What is the local port needed for?
Many thanks in advance.
The method name and other info are set on the endpoint and stored in the registry, but we cannot set custom
Metadata
on a handler's Endpoint.
return &registry.Endpoint{
	Name:     method.Name,
	Request:  request,
	Response: response,
	Metadata: map[string]string{
		"stream": fmt.Sprintf("%v", stream),
	},
}
My other service lists services from the registry and gets the service handlers from the Endpoints. If we could set more metadata on an endpoint, my service could know more details about a handler, so I really need this feature. Thanks!!!
An invalid request returns the following error. It can be found at server/rpc_server.go:88.
{
"id": "",
"code": 0,
"detail": "json: cannot unmarshal string into Go value of type int64",
"status": ""
}
Return a platform error instead (id go.micro.rpc.server) with code 400.
Hi,
Thank you for writing such an amazing framework. However, I encountered a problem when deploying my program: if I deploy it in Docker, other micro-based programs (even the micro CLI outside Docker) cannot access it.
I looked into it and figured out that the problem comes from the "ServiceAddress" key in the registration, whose value is an internal Docker IP that cannot be reached from any remote environment.
The registry has the following value:
{
"Node":"b75b88eb2e1a",
"Address":"192.168.100.17",
"ServiceID":"tech.ngs.sms-a4d6a869-38fc-11e6-895d-0242ac110002",
"ServiceName":"tech.ngs.sms",
"ServiceTags":[
"t-789caa562a294acc2b2ec82f2a51b252ca28292950aa05040000ffff4f930768",
"t-789caa564a2acacf4e2d52b252ca28292950aa05040000ffff364d0600",
"t-789caa562a4e2d2a4b2d52b2522a2a4856aa05040000ffff30dd0597",
"t-789caa562a4a4dcf2c2e29aa54b2524acecf2b2ecd51aa05040000ffff560f07c8",
"e-789c9492b14ec3400c86dfe5e68801b6ac1421862e3d850531b889a92c39be625f2a219477c7112569a920e916399f2d7ffeef3308b418ca10d731a21ea846bb8928cd230a2ad55e0e45507cefd07228477cfafd0c4c0d644ab2395245c81ffb79e800ec5fa17c198756865a554fab69826525d99dc0d231f7c5d8b14e5b625ccedfa766967eed0761db27313c31768bcd4f759c705ebc348a5ded07b5a9619b12ffbdde836ad2e536c75b2e919a9a626bbe72c7f9b7c577e9d2e17c3c49bebbfd2710b785dd15ebac3003f17c26432aadb3fe8a6048c53984d61bde80fdf87dff150000ffffa6acf21e",
"e-789c9451b14ec3400cfd17cf11036c5981314b6f440c6e63a2939cbb72f65542e8fe1d07a24b4a15b5ddace7f79edeb3bf21e048d082eb9ca374f207920747a1dfd1e04529190e0d24facc240a6de5aff6bb79d9807e1d377727649ba07dab165ddc7ba645269a7c1856d490994b53f9cfb1bfca7e2f535839c620b44afb9b6446abc3397899cfe5835d4316c13e46de8ef79a524cb7b7992f734ba945e446b1c899f57f8b3fe8b2c3b9bd0ffaf4b8edde595b1cee88f3428a9eafff64faca68dc1e15a7af188f7034c107b21dbf949f000000ffff0948db07",
"v-789c32d43300040000ffff01220090"
],
"ServiceAddress":"172.17.0.2",
"ServicePort":10000,
"ServiceEnableTagOverride":false,
"CreateIndex":4003,
"ModifyIndex":4003
}
I implemented the following code to initialise the service:
b := broker.NewBroker(
broker.Addrs(":10001"),
)
srv := server.NewServer(
server.Name(SERVICE_NAME),
server.Address(":10000"),
server.Broker(b),
)
service := micro.NewService(
micro.Server(srv),
Thanks
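One workaround, assuming a go-micro version that exposes the server Advertise option (it is present in later releases, so treat its availability here as an assumption): keep binding to the container port but advertise the externally reachable host:port to the registry.
srv := server.NewServer(
	server.Name(SERVICE_NAME),
	server.Address(":10000"),
	// the address written to the registry, e.g. the docker host IP or a published port
	server.Advertise("203.0.113.10:10000"),
	server.Broker(b),
)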
Draw a diagram to demonstrate the interaction and flow between:
A lot of things are loaded at runtime through package-level variables, like var DefaultClient Client = newRPCClient()
.
This makes it easy to swap an element out, but it also causes an issue with configuration. This gist demonstrates the issue.
More concretely, the way I stumbled upon this was by configuring a different port for the consul agent. To do that, I overwrote registry.DefaultRegistry
like this: registry.DefaultRegistry = registry.NewRegistry([]string{"127.0.0.1:9123"})
The service would still look for port 8500 because the server package loads a default server. At that point the new configuration - the one we define within our application logic - isn't loaded yet and thus falls back to the default port.
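A workaround that sidesteps the initialisation-order problem, using only options already shown elsewhere in this thread: build the registry yourself and pass it explicitly, rather than mutating registry.DefaultRegistry and hoping nothing captured the old value.
r := registry.NewRegistry([]string{"127.0.0.1:9123"})

srv := server.NewServer(
	server.Registry(r), // this server talks to the custom registry, not the default
)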
Is there interest in adding a msgpack codec in the core? I implemented one that I can submit a PR for.
We know it is common for a server to return an error, but the default selector's blacklist kicks in after 3 errors, and then we have no services available. Can we turn off the blacklist?
Is there any plan to support the grpc transport?
As the subject says, the go-micro RPC client should provide a connection pool for the target server port.
I changed the sample code go-micro/example/client/codegen/codegen.go to test the performance of go-micro RPC.
The changed snippet of code is below:
func call(i int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := 0; j < i; j++ {
		rsp, err := cl.Call(context.Background(), &example.Request{Name: "John"})
		if err != nil {
			fmt.Println("call err: ", err, rsp)
			return
		}
	}
	//fmt.Println("Call:", i, "rsp:", rsp.Msg)
}

func main() {
	cmd.Init()
	wg := new(sync.WaitGroup)
	var start, stop int
	start = time.Now().Nanosecond()
	fmt.Println("\n--- Call example ---\n")
	wg.Add(5)
	go call(10000, wg)
	go call(10000, wg)
	go call(10000, wg)
	go call(10000, wg)
	go call(10000, wg)
	wg.Wait()
	stop = time.Now().Nanosecond()
	println("Call example Time used: ", stop-start)
}
While the client is running, the console throws errors like the following:
call err: {"id":"go.micro.client","code":500,"detail":"Error sending request: dial tcp 10.8.0.10:50661: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.","status":"Internal Server Error"} <nil>
call err: {"id":"go.micro.client","code":500,"detail":"Error sending request: dial tcp 10.8.0.10:50661: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.","status":"Internal Server Error"} <nil>
call err: {"id":"go.micro.client","code":500,"detail":"Error sending request: dial tcp 10.8.0.10:50661: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.","status":"Internal Server Error"} <nil>
call err: {"id":"go.micro.client","code":500,"detail":"Error sending request: dial tcp 10.8.0.10:50661: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.","status":"Internal Server Error"} <nil>
call err: {"id":"go.micro.client","code":500,"detail":"Error sending request: dial tcp 10.8.0.10:50661: bind: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.","status":"Internal Server Error"} <nil>
I checked the go-micro rpc_client.go code and found that every remote method call creates a new TCP connection, which is destroyed after the invocation. That causes a serious performance problem in a high-load system.
rpc_client.go code snippet:
func (r *rpcClient) call(ctx context.Context, address string, req Request, resp interface{}) error {
......
c, err := r.opts.Transport.Dial(address, transport.WithTimeout(r.opts.DialTimeout))
if err != nil {
return errors.InternalServerError("go.micro.client", fmt.Sprintf("Error sending request: %v", err))
}
var once sync.Once
......
I hope the go-micro client can provide a connection pool; only then can go-micro be used in a real production system.
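For illustration, a minimal pool sketch (not go-micro's eventual implementation) that reuses dialled transport clients per address instead of dialling on every call; it assumes a transport.Client can safely be reused for sequential requests.
package pool

import (
	"sync"

	"github.com/micro/go-micro/transport"
)

type Pool struct {
	tr    transport.Transport
	size  int
	mu    sync.Mutex
	conns map[string][]transport.Client
}

func New(tr transport.Transport, size int) *Pool {
	return &Pool{tr: tr, size: size, conns: make(map[string][]transport.Client)}
}

// Get returns a pooled connection for addr, dialling a new one if none is idle.
func (p *Pool) Get(addr string) (transport.Client, error) {
	p.mu.Lock()
	if cs := p.conns[addr]; len(cs) > 0 {
		c := cs[len(cs)-1]
		p.conns[addr] = cs[:len(cs)-1]
		p.mu.Unlock()
		return c, nil
	}
	p.mu.Unlock()
	return p.tr.Dial(addr)
}

// Put returns a connection to the pool, closing it if the pool is already full.
func (p *Pool) Put(addr string, c transport.Client) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if len(p.conns[addr]) >= p.size {
		c.Close()
		return
	}
	p.conns[addr] = append(p.conns[addr], c)
}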
Is there any interest in etcd support?
I saw you are using the deprecated Go library for etcd. They deprecated that one and developed a new one: https://github.com/coreos/etcd/tree/master/client
Just letting you know. :)
Passed-in options are only used for command line flags: https://github.com/micro/go-micro/blob/master/service.go#L63. Apply them where CLI flags are not used.
In the case of RabbitMQ we can translate a broker.Message directly into an amqp.Delivery. In most other cases we're only allowed to transport []byte. We should not assume JSON for these cases but instead allow a Codec to be specified that defaults to the JSON codec.
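A hypothetical sketch of that proposal; the Codec interface and WithCodec option below do not exist in the broker package, they only show what a pluggable codec (defaulting to JSON) could look like.
// Codec marshals and unmarshals broker.Message bodies; JSON would remain the default.
type Codec interface {
	Marshal(interface{}) ([]byte, error)
	Unmarshal([]byte, interface{}) error
}

// WithCodec sets the codec on the broker options (hypothetical option and field).
func WithCodec(c Codec) Option {
	return func(o *Options) {
		o.Codec = c
	}
}

// usage (hypothetical): broker.NewBroker(broker.WithCodec(msgpackCodec{}))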