tomerfiliba-org / rpyc
RPyC (Remote Python Call) - A transparent and symmetric RPC library for Python
Home Page: http://rpyc.readthedocs.org
License: Other
In the definition of safe_attrs, for Python 3, __nonzero__ should become __bool__, and next should become __next__.
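For context, here is a minimal iterator class using the Python 3 protocol names (an illustrative sketch, not rpyc code):

```python
class CountDown(object):
    """Minimal iterator illustrating the Python 3 protocol names."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):          # Python 2 called this "next"
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n
    def __bool__(self):          # Python 2 called this "__nonzero__"
        return self.n > 0

c = CountDown(3)
assert bool(c) is True
assert list(c) == [2, 1, 0]
assert bool(c) is False
```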
The example for the upload_module function in classic rpyc has a copy-and-paste error: it only contains a call to upload_package.
Very similar to #32, which was closed. If I use a BgServingThread on the "client" and register a callback with the server, 100 invocations of the callback take 10 seconds. I tried updating the SLEEP_INTERVAL and even went so far as to remove the sleep in _bg_server and the sleep in the poll method of Win32PipeStream. However, no matter what I changed, I couldn't improve the turnaround time of a simple callback to anything faster than 0.1 seconds per invocation.
Am I missing something? I'm on Win7, 64-bit, Python 2.6.
In netref.py the last statement of inspect_methods is
return methods.items()
In Python 2 this returns a list, while in Python 3 it returns an iterator; this may cause problems in user code. I suggest replacing the statement with:
return list(methods.items())
which returns a list in both cases (at a minor loss in efficiency).
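The difference can be seen directly (a small demonstration, not rpyc code):

```python
# In Python 2, dict.items() returns a list; in Python 3 it returns a view
# object that does not support indexing or repeated consumption the same way.
methods = {"__len__": "length", "__iter__": "iterate"}
as_list = list(methods.items())
# Wrapping in list() yields a real, indexable list under both versions:
assert isinstance(as_list, list)
assert len(as_list) == 2
```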
firstway (10/12/10)
I got an exception (KeyError) in protocol.py, but it seems it was not raised by the remote server (I am not sure). How can I find where the KeyError was raised?
Traceback (most recent call last):
File "/home/admin/search_test/bin/launchpad/resource_keeper_helper.py", line 24, in get_next
dir_list = [str(n) for n in rk_instance.query(os.path.join(query,'*'))]
File "/home/launch/.python/lib/python2.5/site-packages/rpyc-3.0.7-py2.5.egg/rpyc/core/netref.py", line 123, in __call__
File "/home/launch/.python/lib/python2.5/site-packages/rpyc-3.0.7-py2.5.egg/rpyc/core/netref.py", line 45, in syncreq
File "/home/launch/.python/lib/python2.5/site-packages/rpyc-3.0.7-py2.5.egg/rpyc/core/protocol.py", line 342, in sync_request
KeyError: 46912509222496
I work with RPYC classic server (see code below) under Windows XP.
When I have BgServingThread, 100 calls of execute() takes about 10 seconds.
Without BgServingThread, 100 calls of execute() takes about 0.01 seconds.
Why? What can I do to have both BgServingThread running and execute() working fast?
import rpyc
import time

c = rpyc.classic.connect("localhost")
t = rpyc.BgServingThread(c)  # delete this line for fast execute

start = time.time()
for i in range(100):
    c.execute("newObj = %d" % (i,))
stop = time.time()

print "added %d simple objects one by one, %f seconds" % (100, stop - start)
t.stop()  # delete this line if the line above is deleted
Any ideas would be appreciated...
mman (3/1/10)
Hello all,
I'm using the ForkingServer and I have noticed that the server side hangs when a client disconnects. Looking through the code I found that the SIGCHLD handler inside the ForkingServer class needs some checks. Below is a patch that fixes the problem. Tomer, feel free to include this in the rpyc distribution.
I'm using python 2.6.4 here.
diff -ur rpyc-3.0.7-py2.6/utils/server.py rpyc-3.0.7-py2.6.my/utils/server.py
--- rpyc-3.0.7-py2.6/utils/server.py 2009-09-22 14:38:28.000000000 +0300
+++ rpyc-3.0.7-py2.6.my/utils/server.py 2010-03-01 12:44:36.000000000 +0200
@@ -202,11 +202,15 @@
     def _handle_sigchld(signum, unused):
         try:
             while True:
-                os.waitpid(-1, os.WNOHANG)
+                r = os.waitpid(-1, os.WNOHANG)
+                print "waitpid returned %s" % repr(r)
+                if r == (0, 0):
+                    time.sleep(1)
+                else:
+                    print "our child %d terminated" % r[0]
+                    break
         except OSError:
             pass
-        # re-register signal handler (see man signal(2), under Portability)
-        signal.signal(signal.SIGCHLD, self._handle_sigchld)

     def _accept_method(self, sock):
         pid = os.fork()
According to http://rpyc.sourceforge.net/docs/secure-connection.html:
"And then, establishing a connection over SSH is a one-liner:
conn = rpyc.ssh_connect(sshctx, 12345)"
but rpyc.ssh_connect() does not exist. One must do
from rpyc.utils.factory import ssh_connect
to access ssh_connect.
The fix is simple, I'm working on a pull request right now.
EDIT: the pull request is #53
[tangent]
Thanks for all of your hard work. I love the new ssh integration. This is so much cleaner than what I used to do. Keep it up!
[/tangent]
I added ssl support (using the builtin ssl module). For older versions of Python (2.3-2.5), people could install http://pypi.python.org/pypi/ssl/. Needs some testing.
https://groups.google.com/d/msg/rpyc/IccInrL216E/eEGO-psKFNIJ
Aviv Ben-Yosef (11/11/09)
Hello,
I've been trying to use the rpyc.classic.connect_subproc function, and found out that it passes Popen the string 'python' as the executable. The problem is that on machines where 'python' is not the name of the python executable, or isn't the executable that was used to execute the code in the first place, it won't work. This should be changed to sys.executable instead (I can send in the patch if needed).
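The suggested fix can be illustrated with plain subprocess (a sketch of the idea, not rpyc's actual code):

```python
import subprocess
import sys

# sys.executable is the absolute path of the interpreter currently running,
# so the child process is guaranteed to use the same Python as the parent,
# regardless of what (if anything) the name "python" resolves to on PATH.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello from the same interpreter')"],
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
```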
Would it be possible to make rpyc importable as a sub-package (with relative imports internally)?
Hi,
I have some little suggestions for server.py.
The first is rather trivial. It would be nice to see in the logs who has used the server. Of course one can create an additional log entry when the service starts, but I think it is nicer if it is included in Server._serve_client().
def _serve_client(self, sock, credentials):
    h, p = sock.getpeername()
    #{{ my modification
    if (type(credentials) != type('')) or (credentials == ''):
        self.logger.info("welcome %s:%s", h, p)
    else:
        self.logger.info("welcome %s [%s:%s]", credentials, h, p)
    #}}
    try:
        config = dict(self.protocol_config, credentials = credentials)
        conn = Connection(self.service, Channel(SocketStream(sock)),
                          config = config, _lazy = True)
        conn._init_service()
        conn.serve_all()
    finally:
        self.logger.info("goodbye %s:%s", h, p)
My second suggestion involves Server._authenticate_and_serve_client(). If I run the server under Linux, I observe that tlslite raises an exception if the client does not properly close the connection. This does not happen if I run the server under Windows. Therefore I suggest adding some error handling around the call to _serve_client() and logging the exception and traceback properly.
def _authenticate_and_serve_client(self, sock):
    try:
        if self.authenticator:
            h, p = sock.getpeername()
            try:
                sock, credentials = self.authenticator(sock)
            except AuthenticationError:
                self.logger.info("%s:%s failed to authenticate, rejecting connection", h, p)
                return
            else:
                self.logger.info("%s:%s authenticated successfully", h, p)
        else:
            credentials = None
        #{{ my modification
        try:
            self._serve_client(sock, credentials)
        except Exception, e:
            etype = sys.exc_type
            excinfo = sys.exc_info()
            try:
                ename = etype.__name__
            except AttributeError:
                ename = etype
            self.logger.warn("Exception: %s", ename)
            self.logger.traceback(excinfo)
        #}}
    finally:
        try:
            sock.shutdown(socket.SHUT_RDWR)
        except Exception:
            pass
        sock.close()
        self.clients.discard(sock)
Reported by rudiger. Here's a code snippet to reproduce the problem:
>>> import rpyc
>>> c=rpyc.classic.connect("pro114")
>>> import platform
>>> platform.architecture() # client is 32 bit ubuntu 11.04, python 2.7
('32bit', 'ELF')
>>> c.modules.platform.architecture() # server is 64 bit generic linux, python 2.5
('64bit', 'ELF')
>>> c.execute("""def f(lst):
... for x in lst[1:]:
... print x
... """)
>>> l=[5,6,7,8,9]
>>> c.namespace["f"]
>>> c.namespace["f"](l)
======= Remote traceback =======
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 227, in _dispatch_request
res = self._HANDLERS[handler](self, *args)
File "/usr/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 445, in _handle_callattr
return self._handle_getattr(oid, name)(*args, **dict(kwargs))
OverflowError: Python int too large to convert to C long
--------------------------------
Traceback (most recent call last):
File "rpyc/core/protocol.py", line 227, in _dispatch_request
File "rpyc/core/protocol.py", line 433, in _handle_call
File "", line 2, in f
File "rpyc/core/netref.py", line 131, in method
File "rpyc/core/netref.py", line 42, in syncreq
File "rpyc/core/protocol.py", line 347, in sync_request
OverflowError: Python int too large to convert to C long
======= Local exception ========
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 125, in __call__
return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
File "/usr/lib/python2.7/dist-packages/rpyc/core/netref.py", line 42, in syncreq
return conn().sync_request(handler, oid, *args)
File "/usr/lib/python2.7/dist-packages/rpyc/core/protocol.py", line 347, in sync_request
raise obj
OverflowError: Python int too large to convert to C long
very weird.
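One possible explanation (my hypothesis, not confirmed in the thread): in Python 2, the open-ended slice is dispatched with the platform's sys.maxint as the stop value, which differs between 32-bit and 64-bit builds.

```python
# Hypothesis, not confirmed in the thread: in Python 2, lst[1:] is
# dispatched as lst.__getslice__(1, sys.maxint). On the 64-bit server
# sys.maxint is 2**63 - 1; when that call is proxied back to the 32-bit
# client, the stop value no longer fits in the client's C long, which
# would produce exactly this OverflowError.
maxint_64bit = 2**63 - 1  # sys.maxint on a 64-bit Python 2
maxint_32bit = 2**31 - 1  # sys.maxint on a 32-bit Python 2
assert maxint_64bit > maxint_32bit
```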
Below is with RPyC V3.1.0
rpyc_vdbconf.py --help
Traceback (most recent call last):
File "rpyc_vdbconf.py", line 20, in
from rpyc.utils.authenticators import VdbAuthenticator
ImportError: cannot import name VdbAuthenticator
A better solution would be to change @staticmethod to @classmethod and rename "self" to "cls", but any working resolution is fine for now :)
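A sketch of what the proposed @classmethod variant might look like (an illustration of the suggestion, not rpyc's actual code; the reaping loop is kept as in the original):

```python
import os
import signal

class ForkingServer(object):
    @classmethod
    def _handle_sigchld(cls, signum, unused):
        try:
            # reap terminated children without blocking
            while True:
                os.waitpid(-1, os.WNOHANG)
        except OSError:
            pass
        # re-register signal handler (see man signal(2), under Portability);
        # cls is available here, unlike self inside a @staticmethod
        signal.signal(signal.SIGCHLD, cls._handle_sigchld)
```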
I will release 3.0.8 in the summer, after my exams, to resolve all these little issues.
By the way, if someone would volunteer to fix these quirks, I'll be glad to add them as a committer on github and sourceforge.
-tomer
An NCO and a Gentleman
On Fri, May 7, 2010 at 17:30, Tim Arnold [email protected] wrote:
I guess you don't want a patch since it's already fixed in
development. I'm not sure your fix is the same as mine since I don't
have a deep understanding of the code. But just in case someone wants
it, this worked for me and my server no longer dies.
I changed the _handle_sigchld method in utils/server.py to no longer
be a static method. It now looks like this:
# @staticmethod
def _handle_sigchld(self, signum, unused):
    try:
        while True:
            os.waitpid(-1, os.WNOHANG)
    except OSError:
        pass
    # re-register signal handler (see man signal(2), under Portability)
    signal.signal(signal.SIGCHLD, self._handle_sigchld)
That is, commenting out the @staticmethod decorator and adding self to the signature.
As I said, everything seems to be working fine now.
any comments, suggestions are very welcome.
--Tim
On May 5, 3:53 pm, Tim Arnold [email protected] wrote:
> On May 5, 3:39 pm, Alex Grönholm [email protected] wrote:
>> On 5.5.2010 21:30, Tim Arnold wrote:
>>> Hi, I see in a few past posts that this error has been fixed in development, but there has not yet been a release. The traceback is below. Is there a patch available, or could one be made available to get this fixed? Or maybe a new release is coming soon?
>> I asked Tomer to release an updated version, but he hasn't acted on it. The 3.5.x series isn't going to come out yet either due to me being busy finishing my thesis.
> Thanks for letting me know, I'll see if I can debug it myself.
> thanks, --Tim

many thanks,
--Tim Arnold

File "services.py", line 41, in
  s.start()
File "/usr/local/lib/python2.7/site-packages/rpyc/utils/server.py", line 167, in start
  self.accept()
File "/usr/local/lib/python2.7/site-packages/rpyc/utils/server.py", line 74, in accept
  sock, (h, p) = self.listener.accept()
File "/usr/local/lib/python2.7/site-packages/rpyc/utils/server.py", line 209, in _handle_sigchld
  signal.signal(signal.SIGCHLD, self._handle_sigchld)
NameError: global name 'self' is not defined
tlslite has been unmaintained for a long time, and since we've had decent ssl support since Python 2.6 (plus the ssl package for Python 2.2-2.5), there's no longer any need to keep the tlslite integration.
(sorry for the crappy blocks, this is the best I could get in markdown.... why does every site need its own parser? sigh :-) )
Paths have been slightly redacted.
From py3 to py2:
Client:
Python 3.1.2 (r312:79147, Sep 27 2010, 09:45:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import rpyc
>>> rpyc.__version__
(3, 2, 1)
>>> c = rpyc.classic.connect('localhost')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "(...)/lib/python3.1/site-packages/rpyc/utils/classic.py", line 67, in connect
return factory.connect(host, port, SlaveService, ipv6 = ipv6)
File "(...)/lib/python3.1/site-packages/rpyc/utils/factory.py", line 84, in connect
return connect_stream(s, service, config)
File "(...)/lib/python3.1/site-packages/rpyc/utils/factory.py", line 45, in connect_stream
return connect_channel(Channel(stream), service = service, config = config)
File "(...)/lib/python3.1/site-packages/rpyc/utils/factory.py", line 34, in connect_channel
return Connection(service, channel, config = config)
File "(...)/lib/python3.1/site-packages/rpyc/core/protocol.py", line 136, in __init__
self._init_service()
File "(...)/lib/python3.1/site-packages/rpyc/core/protocol.py", line 139, in _init_service
self._local_root.on_connect()
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 143, in on_connect
self._conn.builtin = self._conn.modules.builtins
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 114, in __getattr__
return self[name]
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 111, in __getitem__
self.__cache[name] = self.__getmodule(name)
TypeError: 'b'instancemethod'' object is not callable
Server:
$ python2 ./rpyc_classic.py
INFO:SLAVE/18812:server started on [0.0.0.0]:18812
INFO:SLAVE/18812:accepted 127.0.0.1:38119
INFO:SLAVE/18812:welcome [127.0.0.1]:38119
Other way around:
Client:
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import rpyc
>>> rpyc.__version__
(3, 2, 1)
>>> c = rpyc.classic.connect('localhost')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "(...)/lib/python2.6/site-packages/rpyc/utils/classic.py", line 67, in connect
return factory.connect(host, port, SlaveService, ipv6 = ipv6)
File "(...)/lib/python2.6/site-packages/rpyc/utils/factory.py", line 84, in connect
return connect_stream(s, service, config)
File "(...)/lib/python2.6/site-packages/rpyc/utils/factory.py", line 45, in connect_stream
return connect_channel(Channel(stream), service = service, config = config)
File "(...)/lib/python2.6/site-packages/rpyc/utils/factory.py", line 34, in connect_channel
return Connection(service, channel, config = config)
File "(...)/lib/python2.6/site-packages/rpyc/core/protocol.py", line 136, in __init__
self._init_service()
File "(...)/lib/python2.6/site-packages/rpyc/core/protocol.py", line 139, in _init_service
self._local_root.on_connect()
File "(...)/lib/python2.6/site-packages/rpyc/core/service.py", line 145, in on_connect
self._conn.builtin = self._conn.modules.__builtin__
File "(...)/lib/python2.6/site-packages/rpyc/core/service.py", line 114, in __getattr__
return self[name]
File "(...)/lib/python2.6/site-packages/rpyc/core/service.py", line 111, in __getitem__
self.__cache[name] = self.__getmodule(name)
File "(...)/lib/python2.6/site-packages/rpyc/core/netref.py", line 194, in __call__
return syncreq(_self, consts.HANDLE_CALL, args, kwargs)
File "(...)/lib/python2.6/site-packages/rpyc/core/netref.py", line 69, in syncreq
return conn().sync_request(handler, oid, *args)
File "(...)/lib/python2.6/site-packages/rpyc/core/protocol.py", line 423, in sync_request
self.serve(0.1)
File "(...)/lib/python2.6/site-packages/rpyc/core/protocol.py", line 371, in serve
data = self._recv(timeout, wait_for_lock = True)
File "(...)/lib/python2.6/site-packages/rpyc/core/protocol.py", line 329, in _recv
data = self._channel.recv()
File "(...)/lib/python2.6/site-packages/rpyc/core/channel.py", line 50, in recv
header = self.stream.read(self.FRAME_HEADER.size)
File "(...)/lib/python2.6/site-packages/rpyc/core/stream.py", line 169, in read
raise EOFError("connection closed by peer")
EOFError: connection closed by peer
Server:
$ python3 ./rpyc_classic.py
INFO:SLAVE/18812:server started on [0.0.0.0]:18812
INFO:SLAVE/18812:accepted 127.0.0.1:53998
INFO:SLAVE/18812:welcome [127.0.0.1]:53998
INFO:SLAVE/18812:goodbye [127.0.0.1]:53998
ERROR:SLAVE/18812:client connection terminated abruptly
Traceback (most recent call last):
File "(...)/lib/python3.1/site-packages/rpyc/utils/server.py", line 165, in _authenticate_and_serve_client
self._serve_client(sock2, credentials)
File "(...)/lib/python3.1/site-packages/rpyc/utils/server.py", line 189, in _serve_client
conn._init_service()
File "(...)/lib/python3.1/site-packages/rpyc/core/protocol.py", line 139, in _init_service
self._local_root.on_connect()
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 143, in on_connect
self._conn.builtin = self._conn.modules.builtins
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 114, in __getattr__
return self[name]
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 111, in __getitem__
self.__cache[name] = self.__getmodule(name)
TypeError: 'b'instancemethod'' object is not callable
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.1/threading.py", line 516, in _bootstrap_inner
self.run()
File "/usr/lib/python3.1/threading.py", line 469, in run
self._target(*self._args, **self._kwargs)
File "(...)/lib/python3.1/site-packages/rpyc/utils/server.py", line 165, in _authenticate_and_serve_client
self._serve_client(sock2, credentials)
File "(...)/lib/python3.1/site-packages/rpyc/utils/server.py", line 189, in _serve_client
conn._init_service()
File "(...)/lib/python3.1/site-packages/rpyc/core/protocol.py", line 139, in _init_service
self._local_root.on_connect()
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 143, in on_connect
self._conn.builtin = self._conn.modules.builtins
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 114, in __getattr__
return self[name]
File "(...)/lib/python3.1/site-packages/rpyc/core/service.py", line 111, in __getitem__
self.__cache[name] = self.__getmodule(name)
TypeError: 'b'instancemethod'' object is not callable
refer to http://bitbucket.org/agronholm/rpyc/
The statement
raise TypeError("got unexpected keyword argument %r" % (kwargs.keys()[0],))
will raise a "'dict_keys' object does not support indexing" error in Python 3. Replace that statement with
raise TypeError("got unexpected keyword argument %r" % (list(kwargs.keys())[0],))
Rüdiger (Feb 14)
Hi,
please ignore my former post about the performance issue. The issue was in another thread and not in the client-server communication thread.
I would like to propose a threaded logger (this was actually where my problem lay) that uses a queue to speed up the client-server interaction. If the network is fast, then writing the server log becomes a bottleneck. Either one switches off logging, or one uses a separate thread for the logging; the latter is what I would like to propose.
The threaded logger would look like this:
The threaded logger would look like this:
# assumed context (imports from the enclosing module):
#   import sys, os, time, traceback
#   import threading as THG, thread as TH
#   from collections import deque

class Logger(object):
    def __init__(self, name, console = sys.stderr, file = None, show_name = True,
            show_pid = False, show_tid = False, show_date = False, show_time = True,
            show_label = True, quiet = False):
        self.name = name
        self.console = console
        self.file = file
        self.show_name = show_name
        self.show_pid = show_pid
        self.show_tid = show_tid
        self.show_date = show_date
        self.show_time = show_time
        self.show_label = show_label
        self.quiet = quiet
        self.filter = set()
        # additions for the threaded logger start here ==========
        self.QueueEvent = THG.Event()
        self.QueueEvent.clear()
        self.QueueLock = TH.allocate_lock()
        self.Queue = deque([])
        self.useQueue = True  # one might use an additional parameter use_queue = True
        self.QueueThd = THG.Thread(target = self.DoQueue)
        self.QueueThd.setDaemon(True)
        self.QueueThd.start()

    def StopQueue(self):
        self.useQueue = False
        self.QueueEvent.set()
        self.QueueThd.join()

    def DoQueue(self):
        while self.useQueue:
            self.QueueEvent.wait()
            self.QueueEvent.clear()
            self.QueueLock.acquire()
            s = ''
            while len(self.Queue):
                s += self.Queue.popleft()
            self.QueueLock.release()
            self._Write(s)

    def Write(self, s):
        if self.useQueue:
            self.QueueLock.acquire()
            self.Queue.append(s)
            self.QueueLock.release()
            self.QueueEvent.set()
        else:
            self._Write(s)

    def _Write(self, text):
        if self.console:
            self.console.write(text)
        if self.file:
            self.file.write(text)
    # additions for the threaded logger end here ==========

    def log(self, label, msg):
        if label in self.filter:
            return
        header = []
        if self.show_name:
            header.append("%-10s" % (self.name,))
        if self.show_label:
            header.append("%-10s" % (label,))
        if self.show_date:
            header.append(time.strftime("%Y-%m-%d"))
        if self.show_time:
            header.append(time.strftime("%H:%M:%S"))
        if self.show_pid:
            header.append("pid=%d" % (os.getpid(),))
        if self.show_tid:
            header.append("tid=%d" % (TH.get_ident(),))
        if header:
            header = "[" + " ".join(header) + "] "
        sep = "\n...." + " " * (len(header) - 4)
        text = header + sep.join(msg.splitlines()) + "\n"
        # here the queued Write() procedure must be called
        self.Write(text)

    def debug(self, msg, *args, **kwargs):
        if self.quiet: return
        if args: msg %= args
        self.log("DEBUG", msg)

    def info(self, msg, *args, **kwargs):
        if self.quiet: return
        if args: msg %= args
        self.log("INFO", msg)

    def warn(self, msg, *args, **kwargs):
        if self.quiet: return
        if args: msg %= args
        self.log("WARNING", msg)

    def error(self, msg, *args, **kwargs):
        if args: msg %= args
        self.log("ERROR", msg)

    def traceback(self, excinfo = None):
        if not excinfo:
            excinfo = sys.exc_info()
        self.log("TRACEBACK", "".join(traceback.format_exception(*excinfo)))
Maxx (9/8/10)
Hello.
The release still uses tlslite, and the svn version is very rough and normally does not work with encrypted connections. I changed tlslite a little so that it loads the md5 and sha modules via hashlib. I enclose a patch for tlslite.
diff -Naur tlslite-0.3.8.orig/tlslite/mathtls.py tlslite-0.3.8/tlslite/mathtls.py
--- tlslite-0.3.8.orig/tlslite/mathtls.py 2004-10-06 09:01:15.000000000 +0400
+++ tlslite-0.3.8/tlslite/mathtls.py 2010-09-08 14:02:23.000000000 +0400
@@ -4,8 +4,8 @@
 from utils.cryptomath import *
 import hmac
-import md5
-import sha
+from tlslite.utils.hashes import md5
+from tlslite.utils.hashes import sha

 #1024, 1536, 2048, 3072, 4096, 6144, and 8192 bit groups]
 goodGroupParameters = [(2,0xEEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C9C256576D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE48E495C1D6089DAD15DC7D7B46154D6B6CE8EF4AD69B15D4982559B297BCF1885C529F566660E57EC68EDBC3C05726CC02FD4CBF4976EAA9AFD5138FE8376435B9FC61D2FC0EB06E3), \
@@ -113,7 +113,7 @@
     digestmod: A module supporting PEP 247. Defaults to the md5 module.
     """
     if digestmod is None:
-        import md5
+        from tlslite.utils.hashes import md5
         digestmod = md5
     if key == None: #TREVNEW - for faster copying

diff -Naur tlslite-0.3.8.orig/tlslite/messages.py tlslite-0.3.8/tlslite/messages.py
--- tlslite-0.3.8.orig/tlslite/messages.py 2004-10-06 09:01:24.000000000 +0400
+++ tlslite-0.3.8/tlslite/messages.py 2010-09-08 13:53:14.771817811 +0400
@@ -8,8 +8,8 @@
 from X509 import X509
 from X509CertChain import X509CertChain
-import sha
-import md5
+from tlslite.utils.hashes import md5
+from tlslite.utils.hashes import sha

 class RecordHeader3:
     def __init__(self):
@@ -558,4 +558,4 @@
         return self

     def write(self):
-        return self.bytes
\ No newline at end of file
+        return self.bytes

diff -Naur tlslite-0.3.8.orig/tlslite/TLSRecordLayer.py tlslite-0.3.8/tlslite/TLSRecordLayer.py
--- tlslite-0.3.8.orig/tlslite/TLSRecordLayer.py 2005-02-22 08:31:41.000000000 +0300
+++ tlslite-0.3.8/tlslite/TLSRecordLayer.py 2010-09-08 13:53:14.841817816 +0400
@@ -12,8 +12,8 @@
 from utils.cryptomath import getRandomBytes
 from utils import hmac
 from FileObject import FileObject
-import sha
-import md5
+from tlslite.utils.hashes import md5
+from tlslite.utils.hashes import sha
 import socket
 import errno
 import traceback

diff -Naur tlslite-0.3.8.orig/tlslite/utils/cryptomath.py tlslite-0.3.8/tlslite/utils/cryptomath.py
--- tlslite-0.3.8.orig/tlslite/utils/cryptomath.py 2004-10-06 09:02:53.000000000 +0400
+++ tlslite-0.3.8/tlslite/utils/cryptomath.py 2010-09-08 14:02:49.000000000 +0400
@@ -6,7 +6,7 @@
 import math
 import base64
 import binascii
-import sha
+from tlslite.utils.hashes import sha

 from compat import *

diff -Naur tlslite-0.3.8.orig/tlslite/utils/hashes.py tlslite-0.3.8/tlslite/utils/hashes.py
--- tlslite-0.3.8.orig/tlslite/utils/hashes.py 1970-01-01 03:00:00.000000000 +0300
+++ tlslite-0.3.8/tlslite/utils/hashes.py 2010-09-08 13:40:59.000000000 +0400
@@ -0,0 +1,18 @@
+from new import module
+from hashlib import md5 as md5new
+md5 = module('md5')
+md5.md5 = md5new
+md5.new = md5new
+md5.block_size = md5new().block_size
+md5.blocksize = md5.block_size
+md5.digest_size = md5new().digest_size
+md5.digestsize = md5.digest_size
+from hashlib import sha1
+sha = module('sha')
+sha.sha = sha1
+sha.new = sha1
+sha.block_size = sha1().block_size
+sha.blocksize = sha.block_size
+sha.digest_size = sha1().digest_size
+sha.digestsize = sha.digest_size
+

diff -Naur tlslite-0.3.8.orig/tlslite/utils/hmac.py tlslite-0.3.8/tlslite/utils/hmac.py
--- tlslite-0.3.8.orig/tlslite/utils/hmac.py 2004-03-16 08:37:48.000000000 +0300
+++ tlslite-0.3.8/tlslite/utils/hmac.py 2010-09-08 13:50:48.301877875 +0400
@@ -29,7 +29,7 @@
     digestmod: A module supporting PEP 247. Defaults to the md5 module.
     """
     if digestmod is None:
-        import md5
+        from tlslite.utils.hashes import md5
         digestmod = md5
     if key == None: #TREVNEW - for faster copying

diff -Naur tlslite-0.3.8.orig/tlslite/utils/__init__.py tlslite-0.3.8/tlslite/utils/__init__.py
--- tlslite-0.3.8.orig/tlslite/utils/__init__.py 2004-10-06 09:02:13.000000000 +0400
+++ tlslite-0.3.8/tlslite/utils/__init__.py 2010-09-08 13:45:26.000000000 +0400
@@ -9,6 +9,7 @@
     "Cryptlib_TripleDES",
     "cryptomath: cryptomath module",
     "dateFuncs",
+    "hashes",
     "hmac",
     "JCE_RSAKey",
     "compat",

diff -Naur tlslite-0.3.8.orig/tlslite/utils/jython_compat.py tlslite-0.3.8/tlslite/utils/jython_compat.py
--- tlslite-0.3.8.orig/tlslite/utils/jython_compat.py 2005-02-22 07:41:43.000000000 +0300
+++ tlslite-0.3.8/tlslite/utils/jython_compat.py 2010-09-08 13:50:48.321818422 +0400
@@ -1,7 +1,7 @@
 """Miscellaneous functions to mask Python/Jython differences."""

 import os
-import sha
+from tlslite.utils.hashes import sha

 if os.name != "java":
     BaseException = Exception
tlslite is public domain (http://trevp.net/tlslite/readme.txt), so I'll just port it to Python 2.7 and 3.2 (mostly converting "import md5" to "from hashlib import md5") and release the sources.
don't forget that: https://bitbucket.org/agronholm/rpyc/src/4468fd2877d4/rpyc/core/stream.py
sys.maxint is used in three places, in the files netref.py, protocol.py and stream.py. In Python 3 it should be replaced with sys.maxsize.
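A small compatibility shim illustrating the replacement (a sketch, not rpyc's actual code):

```python
import sys

# Python 3 removed sys.maxint; sys.maxsize (the largest value a Py_ssize_t
# can hold) is the usual stand-in. This picks whichever exists:
maxint = getattr(sys, "maxint", sys.maxsize)
assert maxint >= 2**31 - 1
```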
Hi,
I get the following message whenever I try/except access to a non-existent object method:
rpyc/core/vinegar.py:42: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
attrval = getattr(val, name)
Hi,
I want to connect safely to a remote service through TLS, with authorization of the server. I use
utils.factory.ssl_connect(host, port, ca_certs="path_to_ca_file")
The problem is that this connection doesn't check authority: whether or not I point ca_certs at a proper CA certificate file (i.e., a certificate that does not verify the server's certificate), I'm still able to connect to the remote server and perform operations. I guess the issue is in factory's ssl connection code: there is no cert_reqs argument passed in the kwargs dict which goes to ssl.wrap_socket. One solution is to add
ssl_kwargs["cert_reqs"] = ssl.CERT_REQUIRED
diff out:
123a124
>     ssl_kwargs["cert_reqs"] = ssl.CERT_REQUIRED
in context:
    ssl_kwargs = {"server_side" : False}
    if keyfile:
        ssl_kwargs["keyfile"] = keyfile
    if certfile:
        ssl_kwargs["certfile"] = certfile
    if ca_certs:
        ssl_kwargs["ca_certs"] = ca_certs
>   ssl_kwargs["cert_reqs"] = ssl.CERT_REQUIRED
    if ssl_version:
        ssl_kwargs["ssl_version"] = ssl_version
PyScripter (12/24/09)
It appears that multiple async_requests are executed in reverse order.
Run the following test program:
source = """
class AsyncStream(object):
    def __init__(self, stream):
        import rpyc
        from rpyc.core.consts import HANDLE_CALL
        assert isinstance(stream, rpyc.BaseNetref)
        self._stream = stream
        self.origwrite = stream.write
        self.conn = object.__getattribute__(self.origwrite, "____conn__")
        self.oid = object.__getattribute__(self.origwrite, "____oid__")
        self.HANDLE_CALL = HANDLE_CALL
    def __getattr__(self, attr):
        return getattr(self._stream, attr)
    def readline(self, size=None):
        try:
            return self._stream.readline(size)
        except KeyboardInterrupt:
            raise KeyboardInterrupt, "Operation Cancelled"
    def write(self, message):
        conn = self.conn()
        conn.async_request(self.HANDLE_CALL, self.oid, (message,), {})
        while len(conn._async_callbacks) > 100:
            conn.serve()

def asyncIO():
    import sys
    sys.stdin = AsyncStream(sys.stdin)
    sys.stdout = AsyncStream(sys.stdout)
    sys.stderr = AsyncStream(sys.stderr)
asyncIO()
"""

import rpyc
from rpyc.utils.classic import redirected_stdio

c = rpyc.classic.connect("localhost")
redirect = redirected_stdio(c)
try:
    c.execute(source)
    c.execute("for i in range(10): print i")
finally:
    redirect.restore()
    c.close()
The output is
9
8
7
...
instead of
0
1
2
3
...
This used to work as expected in rpyc 2.6. I have tried hard to understand where the reversal occurs, to no avail. Any ideas?
Hi,
I don't know if anybody has reported this.
There is a bug in utils/server.py that prevents the forking server from running.
The bug is in line 209 and can easily be corrected. It involves ForkingServer._handle_sigchld():
@staticmethod
def _handle_sigchld(signum, unused):
    try:
        while True:
            os.waitpid(-1, os.WNOHANG)
    except OSError:
        pass
    # re-register signal handler (see man signal(2), under Portability)
    signal.signal(signal.SIGCHLD, self._handle_sigchld)
The last line uses self, but in a static method self is not available.
Therefore self must be replaced by the class name (ForkingServer).
So the correct version of ForkingServer._handle_sigchld() is:
@staticmethod
def _handle_sigchld(signum, unused):
    try:
        while True:
            os.waitpid(-1, os.WNOHANG)
    except OSError:
        pass
    # re-register signal handler (see man signal(2), under Portability)
    signal.signal(signal.SIGCHLD, ForkingServer._handle_sigchld)
mman (1/21/10)
Hello all,
First of all, thanks to tomer and to the other contributors for this extremely useful module.
I have found that enabling the TCP_NODELAY flag on rpyc sockets reduces the delay in the communication between RPYC clients and RPYC services, further improving the performance of RPYC-based applications (that was my case). I guess it does not harm to enable this flag by default in RPYC (see the 1-line patch below).
Regards,
Michael
--- orig/rpyc/core/stream.py 2009-09-22 14:38:28.000000000 +0300
+++ my/rpyc/core/stream.py 2009-12-16 21:27:49.000000000 +0200
@@ -65,6 +65,7 @@
         s = socket.socket(family, type, proto)
         s.settimeout(timeout)
         s.connect((host, port))
+        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
         return s

     @classmethod
     def connect(cls, host, port, **kwargs):
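The effect of the patch, in isolation (a sketch on a bare socket, not rpyc's stream code):

```python
import socket

# Disabling Nagle's algorithm with TCP_NODELAY makes small request/response
# messages go out immediately instead of being coalesced, which is what
# reduces the per-call latency described above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
assert s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
s.close()
```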
https://groups.google.com/forum/#!topic/rpyc/ejt5VDoKZDk
maybe add a configurable sleep?
PyScripter (12/28/09)
Some analysis suggests the following:
The problem occurs in Connection._unbox:
cls = getattr(obj, "__class__", type(obj))
wx.cvar.__class__ raises a NameError and not an AttributeError as it should (this is a bug on wx's part). As a result _unbox fails. The workaround, which would make rpyc more forgiving of such bugs, is to replace the getattr statement with the following:
try:
    cls = obj.__class__
except:
    cls = type(obj)
Could you please implement this small change?
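A runnable sketch of why the getattr default is not enough (a contrived stand-in for wx.cvar, not the real object): getattr only swallows AttributeError, so a __class__ lookup that raises NameError still propagates.

```python
# Hypothetical stand-in for an object whose __class__ lookup misbehaves,
# as wx.cvar reportedly does.
class Weird(object):
    @property
    def __class__(self):
        raise NameError("broken __class__ lookup")

obj = Weird()
# getattr(obj, "__class__", type(obj)) would re-raise the NameError here;
# the proposed try/except catches it and falls back to type(obj):
try:
    cls = obj.__class__
except Exception:
    cls = type(obj)
assert cls is Weird
```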
Jerome Delattre
and improve setup.py
I got a crash report from a PyScripter user in which he used the following statement:
numpy_slice = slice(numpy.int32(2),None,None)
This crashed the rpyc server.
Although this is a corner case, I am told that numpy users do such things! I suggest the following changes to brine.py.
a) Remove slice from simple types
b) modify dumpable as follows
def dumpable(obj):
    """Indicates whether the given object is *dumpable* by brine

    :returns: ``True`` if the object is dumpable (e.g., dumps would succeed),
              ``False`` otherwise
    """
    if type(obj) in simple_types:
        return True
    if type(obj) in (tuple, frozenset):
        return all(dumpable(item) for item in obj)
    if type(obj) == slice:
        return dumpable(obj.start) and dumpable(obj.step) and dumpable(obj.stop)
    return False
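A runnable sketch of the proposed change (the simple_types set below is an approximation for illustration; the real one lives in rpyc/core/brine.py and, per point (a), would no longer include slice):

```python
# approximation of brine's simple types, with slice removed
simple_types = frozenset([type(None), bool, int, float, complex, str, bytes])

def dumpable(obj):
    """Indicates whether the given object is *dumpable* by brine."""
    if type(obj) in simple_types:
        return True
    if type(obj) in (tuple, frozenset):
        return all(dumpable(item) for item in obj)
    if type(obj) == slice:
        # a slice is dumpable only if all three of its components are
        return dumpable(obj.start) and dumpable(obj.step) and dumpable(obj.stop)
    return False

print(dumpable(slice(2, None, None)))         # plain ints/None: dumpable
print(dumpable(slice(object(), None, None)))  # arbitrary objects: rejected
```

With this version, a slice built from non-primitive components (such as numpy.int32) is reported as not dumpable instead of crashing the server.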
Hi,
I found a problem while making a lot of sequential connections to rpyc.utils.server.ThreadedServer with an SSL Authenticator.
The scenario is the following:
I'm creating an interface object, which connects to an rpyc Service and provides a send method to that service.
Every call to the send method makes the interface object reconnect.
After 1017 reconnections an error occurs ("ssl.SSLError: [Errno 8] _ssl.c:499: EOF occurred in violation of protocol" or "ssl.SSLError: [Errno 185090050] _ssl.c:336: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib").
To reproduce this scenario please find the attached files at: https://gist.github.com/1040068
There are:
In the load function of vinegar.py I see the following code:
if instantiate_custom_exceptions:
    if modname in sys.modules:
        cls = getattr(sys.modules[modname], clsname, None)
    elif not is_py3k and modname == "builtins":
        cls = getattr(exceptions_module, clsname, None)
    else:
        cls = None
The second if statement looks suspicious because it will never be true.
Further below in the same function I see the following code:
if not isinstance(cls, (type, ClassType)):
    cls = None
elif issubclass(cls, ClassType) and not instantiate_oldstyle_exceptions:
    cls = None
elif not issubclass(cls, BaseException):
    cls = None
In python 3k, ClassType points to type, so the issubclass check will always return True; combined with instantiate_oldstyle_exceptions set to True, this gives the unwanted result cls = None.
Antoine
is it possible to expose a class method like this:
class AService(rpyc.Service):
    class exposed_A(A):
        @classmethod
        def exposed_initialize(cls, *args, **kwargs):
            return cls.initialize(*args, **kwargs)
I get this error:
/Volumes/DATA/Users/dechaume/Codes/pod/jpod/src/rpyc/core/vinegar.py:42: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  attrval = getattr(val, name)
======= Remote traceback =======
Traceback (most recent call last):
  File "/Volumes/DATA/Users/dechaume/Codes/pod/jpod/src/rpyc/core/protocol.py", line 223, in _dispatch_request
    res = self._HANDLERS[handler](self, *args)
  File "/Volumes/DATA/Users/dechaume/Codes/pod/jpod/src/rpyc/core/protocol.py", line 432, in _handle_getattr
    return self._access_attr(oid, name, (), "_rpyc_getattr", "allow_getattr", getattr)
  File "/Volumes/DATA/Users/dechaume/Codes/pod/jpod/src/rpyc/core/protocol.py", line 395, in _access_attr
    raise AttributeError("cannot access %r" % (name,))
AttributeError: cannot access 'get'
-----------------------
While trying a workaround, I found something that looks like a bug: standard python methods cannot be applied on a dictionary passed to a service function.
Try this example, based on the one at http://sebulbasvn.googlecode.com/svn/trunk/rpyc/demos/time/,
with the following modifications:
time_service.py:
import time
from rpyc import Service

class TimeService(Service):
    def exposed_get_utc(self):
        return time.time()

    def exposed_get_time(self):
        return time.ctime()

    def exposed_dico(self, dico):
        return dico.get('key')
client.py:
import rpyc

c = rpyc.connect_by_service("TIME")
print "server's time is", c.root.get_time()
dico = {'key': 0}
print "dico key =", c.root.dico(dico)
Is it a bug or am I missing something?
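This is rpyc's attribute-access policy at work rather than a bug in dict handling: by default, only names starting with exposed_ (plus a built-in whitelist) are reachable on objects crossing the connection, so dico.get is blocked on the server side. A simplified, self-contained sketch of that gating logic (the function below is illustrative, not rpyc's actual code):

```python
def access_allowed(name, allow_public_attrs=False, allow_all_attrs=False):
    """Simplified model of rpyc's default attribute filtering."""
    if allow_all_attrs:
        return True
    if name.startswith("exposed_"):
        # exposed_ names are always reachable
        return True
    if allow_public_attrs and not name.startswith("_"):
        # public (non-underscore) names, when the config permits them
        return True
    return False

print(access_allowed("get"))                           # blocked by default
print(access_allowed("get", allow_public_attrs=True))  # reachable when enabled
```

In practice the reported example should work when the connection is created with a configuration permitting public attributes (e.g. passing config={"allow_public_attrs": True} to the connect call and the server).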
Hi,
I would like to propose a slightly safer authenticator which makes it very hard to brute-force the login for a known user. The idea is to recognize unsuccessful logins and block them even if the attacker eventually guesses the right password. This only works nicely in situations where the number of valid users is small, since we keep an access-db in memory for all known users. If one has to handle thousands of users, one can add a purge and re-create method for the access-db. Using vdb might not be clever in this situation, anyhow.
For all known users the access-db stores a count of the number of unsuccessful tries and the time of the last unsuccessful try. In case of a successful login by the TLS handshake, it is then checked whether more than max_retry (3) unsuccessful tries happened in the recent past (about 4 to 20 min., depending on the number of tries); if there was a recent try, the connection will be refused even if the TLS handshake was successful.
A completely successful login resets the try-count.
The weakness of this approach is that an attacker who guesses the user name correctly can block a valid user and prevent him from using the service simply by trying a false password every minute. This might be a problem in situations where the server is used by an automatic system. One should keep the user name secret and have several alternating user names, which are automatically switched by the client if a login fails.
Alternatively the option bypass_known_ip can be activated, but this weakens the security because the attacker can brute-force the login as soon as he can fake an IP address which bypasses the logic.
Greetings
Ruediger
class AccessDbAuthenticator(VdbAuthenticator):
    wait_time = 60.0       # 1 min * number of non successful tries
    max_wait_time = 600.0  # 10 min.
    valid_retries = 3      # allow 3 valid retries
    bypass_known_ip = False

    def __init__(self, vdb):
        VdbAuthenticator.__init__(self, vdb)
        self.lastaccess = {}
        self.users = []
        self.update_accessdb()

    @classmethod
    def from_dict(cls, users):
        inst = cls(tlsapi.VerifierDB())
        for username, password in users.iteritems():
            inst.set_user(username, password)
        inst.update_accessdb()
        return inst

    def update_accessdb(self):
        self.users = self.list_users()
        self.lastaccess = {}
        for u in self.users:
            self.lastaccess[u] = {'last_try': None, 'no_success': 0, 'success_ip': []}
        return

    def set_user(self, username, password):
        VdbAuthenticator.set_user(self, username, password)
        self.update_accessdb()

    def del_user(self, username):
        VdbAuthenticator.del_user(self, username)
        self.update_accessdb()

    def __call__(self, sock):
        h, p = sock.getpeername()
        sock2 = tlsapi.TLSConnection(sock)
        sock2.fileno = lambda fd=sock.fileno(): fd  # tlslite omitted fileno
        try:
            sock2.handshakeServer(verifierDB=self.vdb)
        except Exception:
            if sock2.allegedSrpUsername != '':
                tries = ''
                if sock2.allegedSrpUsername in self.users:
                    self.lastaccess[sock2.allegedSrpUsername]['last_try'] = time.clock()
                    self.lastaccess[sock2.allegedSrpUsername]['no_success'] += 1
                    tries = "(%s tries)" % str(self.lastaccess[sock2.allegedSrpUsername]['no_success'])
                raise AuthenticationError("Bad try for user %s %s" % (sock2.allegedSrpUsername, tries))
        LA = self.lastaccess[sock2.allegedSrpUsername]
        if (self.bypass_known_ip and not (h in LA['success_ip'])) and (LA['no_success'] > self.valid_retries) and (time.clock() - LA['last_try']
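Independent of tlslite, the retry-throttling policy described above can be sketched as a small self-contained class (the names and the penalty formula below are illustrative, not the proposal's exact code):

```python
class RetryThrottle(object):
    """Track failed logins per user and refuse logins during a penalty
    window that grows with the failure count."""

    def __init__(self, wait_time=60.0, max_wait_time=600.0, max_retry=3):
        self.wait_time = wait_time          # penalty per failed try (seconds)
        self.max_wait_time = max_wait_time  # penalty cap
        self.max_retry = max_retry          # free retries before throttling
        self.db = {}                        # user -> {'last_try', 'no_success'}

    def record_failure(self, user, now):
        entry = self.db.setdefault(user, {'last_try': None, 'no_success': 0})
        entry['last_try'] = now
        entry['no_success'] += 1

    def record_success(self, user):
        # a completely successful login resets the try-count
        self.db[user] = {'last_try': None, 'no_success': 0}

    def allowed(self, user, now):
        entry = self.db.get(user)
        if entry is None or entry['no_success'] <= self.max_retry:
            return True
        penalty = min(entry['no_success'] * self.wait_time, self.max_wait_time)
        return (now - entry['last_try']) >= penalty

t = RetryThrottle()
for i in range(4):
    t.record_failure("alice", now=float(i))
print(t.allowed("alice", now=10.0))   # still inside the penalty window
print(t.allowed("alice", now=300.0))  # penalty (4 * 60s = 240s) has elapsed
```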
There is an uninitialised variable e
in core/protocol.py around line 316.
diff --git a/rpyc/core/protocol.py b/rpyc/core/protocol.py
index dfb3ef2..4d85e0b 100644
--- a/rpyc/core/protocol.py
+++ b/rpyc/core/protocol.py
@@ -316,7 +316,7 @@ class Connection(object):
self.serve(0.1)
except select.error:
if not self.closed:
- raise e
+ raise
except EOFError:
pass
finally:
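The patch works because a bare raise re-raises the exception currently being handled, with its original traceback, without needing a bound name such as e. A minimal demonstration (the function name is illustrative):

```python
def serve_forever():
    # stand-in for the serving loop in protocol.py
    try:
        raise OSError("simulated select.error")
    except OSError:
        # bare `raise` re-raises the active exception as-is
        raise

try:
    serve_forever()
except OSError as ex:
    print("re-raised:", ex)
```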
We need to decide on a format first.
I'll suggest:
reStructuredText (http://docutils.sourceforge.net/rst.html)
and
Sphinx (http://sphinx.pocoo.org/) for automatic generation
Tomer, can you write a short summary for each internal module?
Then I'll write it as reStructuredText and use it to create the basic script that will auto-generate the documentation.
Thomas Higdon Show activity 9/23/10
Yep, we agree that pipes on windows suck, but thanks for attempting to
support it!
I ran into the following problem on python 2.4, windows server 2008:
File "/tmp/tmpt9WdVU", line 28, in ?
File "c:\tmp\tmp9v5tb5\rpyc\utils\classic.py", line 28, in connect_pipes
File "c:\tmp\tmp9v5tb5\rpyc\utils\factory.py", line 38, in connect_pipes
File "c:\tmp\tmp9v5tb5\rpyc\utils\factory.py", line 30, in connect_stream
File "c:\tmp\tmp9v5tb5\rpyc\utils\factory.py", line 23, in connect_channel
File "c:\tmp\tmp9v5tb5\rpyc\core\protocol.py", line 87, in __init__
File "c:\tmp\tmp9v5tb5\rpyc\core\protocol.py", line 90, in _init_service
File "c:\tmp\tmp9v5tb5\rpyc\core\service.py", line 106, in on_connect
File "c:\tmp\tmp9v5tb5\rpyc\core\protocol.py", line 365, in root
File "c:\tmp\tmp9v5tb5\rpyc\core\protocol.py", line 339, in sync_request
File "c:\tmp\tmp9v5tb5\rpyc\core\protocol.py", line 301, in serve
File "c:\tmp\tmp9v5tb5\rpyc\core\protocol.py", line 261, in _recv
File "c:\tmp\tmp9v5tb5\rpyc\core\channel.py", line 38, in recv
File "c:\tmp\tmp9v5tb5\rpyc\core\stream.py", line 215, in read
TypeError: Second param must be an integer or a buffer object
It turns out there is a very simple fix, although I'm not completely
certain why it seems no one else has run into this.
--- stream.py.bak 2010-09-22 12:33:19.218556542 -0400
+++ stream.py 2010-09-22 16:57:29.108371675 -0400
@@ -211,7 +211,7 @@
try:
data = []
while count > 0:
- dummy, buf = win32file.ReadFile(self.incoming, min(self.MAX_IO_CHUNK, count))
+ dummy, buf = win32file.ReadFile(self.incoming, int(min(self.MAX_IO_CHUNK, count)))
count -= len(buf)
data.append(buf)
except TypeError, ex:
It looks like the result of the 'min' function is not something that the C layer of win32file interprets as an integer. count comes from the FRAME_HEADER struct in channel.py, which uses the 'L' (unsigned long) format, which I guess doesn't count as an integer.
Anyway, the maintainer may put this into the next release, but I hope this can help someone else in the meantime.
All source files are in DOS format (CR/LF line endings). On Solaris the scripts were not executable after installing RPyC because of this; the files had to be converted to unix format (LF line endings) explicitly.
I would recommend storing all files in unix format, since that works everywhere without any issue.