tailhook / dns-parser
The parser of DNS protocol packets in Rust
License: Apache License 2.0
Hi,
I was just doing some browsing and came across this crate, and I spotted some odds and ends that may not be quite right, or could be improved.
The TTL field is declared as u32, but RFC 1035 says that the TTL is signed, so arguably this should be i32. RFC 2181 section 8 settles the question:

It is hereby specified that a TTL value is an unsigned number, with a minimum value of 0, and a maximum value of 2147483647... Implementations should treat TTL values received with the most significant bit set as if the entire value received was zero.
Packet::parse() can, and so should, use Vec::with_capacity() for both questions and answers.

I hope this is helpful rather than otherwise; if not, please feel free just to close.
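The suggestion can be sketched like this, using simplified stand-in types rather than the crate's own: the DNS header announces the entry counts (QDCOUNT, ANCOUNT) up front, so the section vectors can be pre-sized.

```rust
// Simplified sketch: the header states how many entries follow, so the
// vectors can be pre-allocated and avoid reallocations during parsing.
struct Header { questions: u16, answers: u16 }

fn preallocate(h: &Header) -> (Vec<u32>, Vec<u32>) {
    let questions = Vec::with_capacity(h.questions as usize);
    let answers = Vec::with_capacity(h.answers as usize);
    (questions, answers)
}

fn main() {
    let h = Header { questions: 1, answers: 4 };
    let (q, a) = preallocate(&h);
    // with_capacity guarantees at least the requested capacity.
    assert!(q.capacity() >= 1 && a.capacity() >= 4);
}
```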
Hello,
I have been working on a DNS-over-HTTPS proxy DNS server as an excuse to learn Rust: https://github.com/detro/mooncell. Because I did not want to engage in solving too many problems at once, given my Rust newbie-ness, I decided to leverage https://github.com/bluejekyll/trust-dns. Unfortunately this has turned out to be a really bad call, as the mix of Tokio, Hyper and Trust-DNS isn't really working, especially because Trust-DNS has some design issues that don't make it easy to reuse some of its parts.
So I have built the whole DNS-over-HTTPS part, but I'm now trying to build a simple enough solution to receive DNS queries (both TCP and UDP) and send back a response.
This project seems ideal, as it has a very limited amount of dependencies and it focuses exclusively on DNS Packet parsing.
Can I ask what you are planning to use this for? Just "professional" curiosity...
Thanks again for this
This library (see lines 75 to 77 in 1912667) returns a LabelIsNotAscii error when it encounters queries or answers with labels or hostnames containing UTF-8 characters.
With "normal" DNS queries, UTF-8 characters are converted to Punycode. Multicast DNS queries have the exact same structure, except that they allow either ASCII or UTF-8 characters.[1]
I don't know if it's within the scope of this project. If it is, I believe that line should be changed to allow any UTF-8 encoded string.
[1] https://datatracker.ietf.org/doc/html/rfc6762#appendix-F
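One hedged way to express the relaxed check (hypothetical function, not the crate's actual validation code): accept any valid UTF-8 label in mDNS mode, and keep the ASCII restriction for classic DNS.

```rust
// Hypothetical label check: classic DNS insists on ASCII, while mDNS
// (RFC 6762 appendix F) permits arbitrary UTF-8 in labels.
fn label_ok(label: &[u8], mdns: bool) -> bool {
    if mdns {
        std::str::from_utf8(label).is_ok()
    } else {
        label.is_ascii()
    }
}

fn main() {
    assert!(label_ok(b"example", false));
    assert!(!label_ok("bücher".as_bytes(), false)); // non-ASCII rejected
    assert!(label_ok("bücher".as_bytes(), true));   // but fine for mDNS
}
```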
Hello,
I was wondering if it would be possible to make a new release with the most recent commits. Thanks
Any unknown query types are parsed as A records. For example, dig -t GARBAGE is parsed as a request for an A record.
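A sketch of the fix: keep an explicit fallback variant instead of defaulting to A. The enum here is illustrative, not dns-parser's own QueryType:

```rust
// Illustrative query-type enum with an explicit fallback, so unknown
// codes are preserved instead of being misread as A records.
#[derive(Debug, PartialEq)]
enum QType { A, Aaaa, Txt, Unknown(u16) }

fn qtype_from_u16(code: u16) -> QType {
    match code {
        1 => QType::A,
        28 => QType::Aaaa,
        16 => QType::Txt,
        other => QType::Unknown(other),
    }
}

fn main() {
    assert_eq!(qtype_from_u16(1), QType::A);
    // An unrecognized code is no longer silently treated as A.
    assert_eq!(qtype_from_u16(65123), QType::Unknown(65123));
}
```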
How do I serialize it back?
I want the library to be able to round-trip a DNS packet from bytes to a structured representation and back.
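A round-trip boils down to every parsed structure also knowing how to write itself back in network byte order. A toy sketch for just the first two header fields, with hypothetical types:

```rust
// Toy round-trip for the first four bytes of a DNS header (ID + flags),
// illustrating the bytes -> struct -> bytes pattern.
#[derive(Debug, PartialEq)]
struct MiniHeader { id: u16, flags: u16 }

fn write(h: &MiniHeader) -> [u8; 4] {
    let mut out = [0u8; 4];
    out[0..2].copy_from_slice(&h.id.to_be_bytes());
    out[2..4].copy_from_slice(&h.flags.to_be_bytes());
    out
}

fn read(buf: &[u8; 4]) -> MiniHeader {
    MiniHeader {
        id: u16::from_be_bytes([buf[0], buf[1]]),
        flags: u16::from_be_bytes([buf[2], buf[3]]),
    }
}

fn main() {
    let h = MiniHeader { id: 0x1234, flags: 0x0100 };
    // Parsing what we serialized yields the original structure.
    assert_eq!(read(&write(&h)), h);
}
```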
New to Rust, so this may not be necessary, but I'm trying to build a cache and want to use ResourceRecord as a HashMap key. This requires it to implement the Hash, Eq, and PartialEq traits. Can I impl these externally, or is it better for dns-parser to derive these with the struct definition?
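Implementing the traits externally runs into Rust's orphan rule (you cannot impl a foreign trait for a foreign type), so the usual workaround is a newtype wrapper; deriving in dns-parser itself is the least friction. A sketch with a simplified stand-in for ResourceRecord:

```rust
use std::collections::HashMap;

// Simplified stand-in for a resource record; deriving Hash/Eq/PartialEq
// in the defining crate lets downstream code use it as a HashMap key.
#[derive(Hash, Eq, PartialEq)]
struct Rr { name: String, rtype: u16, data: Vec<u8> }

fn main() {
    let mut cache: HashMap<Rr, u32> = HashMap::new();
    let key = Rr { name: "example.com".into(), rtype: 1, data: vec![93, 184, 216, 34] };
    cache.insert(key, 300); // value: remaining TTL, say
    // An identical record hashes to the same bucket and compares equal.
    let probe = Rr { name: "example.com".into(), rtype: 1, data: vec![93, 184, 216, 34] };
    assert_eq!(cache.get(&probe), Some(&300));
}
```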
As far as I understand, there can't be multiple chunks in a single RDATA field.
There can be multiple ResourceRecords, though, and when there are multiple, the application should consider them as a single concatenated resource record.
Since we generally don't try to join answers, we should return a borrowed slice and remove all the concatenation code, as far as I understand.
Putting this here so it doesn't get lost.
My dig command generates DNS packets that have the AD bit set (the 11th bit in flags), as Wireshark shows me. But dns-parser thinks that bit should be zero because it's reserved.
I think reserved data should be stored as-is, without being checked, as it may have meaning in the future.
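The suggestion amounts to storing the whole flags word and exposing bits through accessors, rather than validating the reserved ones during parsing. A sketch with hypothetical names:

```rust
// Keep the raw flags word; expose individual bits instead of rejecting
// packets whose "reserved" bits happen to be set (e.g. the AD bit).
struct Flags(u16);

impl Flags {
    // AD is bit 5 counting from the LSB of the 16-bit flags field,
    // i.e. the 11th bit from the top as Wireshark displays it.
    fn ad(&self) -> bool { self.0 & 0x0020 != 0 }
}

fn main() {
    let with_ad = Flags(0x8120);  // QR + RD + AD set
    let without = Flags(0x8100);  // QR + RD only
    assert!(with_ad.ad());
    assert!(!without.ad());
}
```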
The parser insists that the content of a valid TXT record be UTF-8, but it ain't necessarily so.
The value is opaque binary data. Often the value for a particular attribute will be US-ASCII [RFC20] or UTF-8 [RFC3629] text, but it is legal for a value to be any binary data.
Here's an example found in the wild:
$ dig @8.8.8.8 like.com.sa TXT
; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7 <<>> @8.8.8.8 like.com.sa TXT
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15148
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;like.com.sa. IN TXT
;; ANSWER SECTION:
like.com.sa. 14399 IN TXT "v=spf1 ip4:70.38.11.53 +a +mx +ip4:\184k\180c ?all"
;; Query time: 300 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Mon Oct 23 08:35:50 BST 2017
;; MSG SIZE rcvd: 97
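One way to express the fix: hold TXT values as opaque bytes and only attempt UTF-8 conversion on display, so parsing never fails on binary data. The type here is illustrative, not the crate's own:

```rust
// Hold TXT values as opaque bytes; conversion to text is best-effort
// and must not make parsing fail.
struct TxtValue(Vec<u8>);

impl TxtValue {
    fn display(&self) -> String {
        // Lossy conversion never fails; invalid bytes become U+FFFD.
        String::from_utf8_lossy(&self.0).into_owned()
    }
}

fn main() {
    // Like the "\184k\180c" run in the dig output above: not valid UTF-8.
    let v = TxtValue(vec![0xB8, b'k', 0xB4, b'c']);
    let shown = v.display();
    assert!(shown.contains('k') && shown.contains('\u{FFFD}'));
}
```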
There's a new(?) qtype of 'HTTPS' (code is 65) that causes this great package to give up on parsing the packet. I'm happy to put up a PR to fix it, but it would be great to know if this project was still alive or if there's a better replacement I should be looking at. The code here hasn't been touched in a while.
Please let me know and thank you.
After parsing a packet and looping through the resource records in the answers, I'm trying to access the RData as a slice of raw bytes (&[u8]). But being new to Rust, I can't figure this out.
RData::A(Record(ip)) gives me an Ipv4Addr, and I can do similar things for other record types, but I just want the raw bytes for caching and duplicate detection.
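Absent a crate API for this, the raw bytes can be sliced straight out of the packet buffer using RDLENGTH, the 16-bit length field that precedes RDATA on the wire. A sketch with a hypothetical helper:

```rust
// Hypothetical helper: given the offset of a record's RDLENGTH field
// within the raw packet, return the RDATA as a borrowed slice.
fn rdata_at(packet: &[u8], rdlength_off: usize) -> Option<&[u8]> {
    let len = u16::from_be_bytes([
        *packet.get(rdlength_off)?,
        *packet.get(rdlength_off + 1)?,
    ]) as usize;
    packet.get(rdlength_off + 2..rdlength_off + 2 + len)
}

fn main() {
    // 2-byte RDLENGTH (= 4) followed by an IPv4 address.
    let buf = [0x00, 0x04, 93, 184, 216, 34];
    assert_eq!(rdata_at(&buf, 0), Some(&buf[2..6]));
    assert_eq!(rdata_at(&buf, 5), None); // truncated packet
}
```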
This library concatenates all strings in a TXT record. Many (probably most) applications treat the strings individually.
See for example:
http://www.zeroconf.org/rendezvous/txtrecords.html
The convention of concatenating TXT strings seems to come from some email-related RFC (SPF?), but I can't find anything about it in the more generic mDNS RFCs. Anyway, I think it is better to let applications interpret the strings themselves.
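TXT RDATA is a sequence of length-prefixed character-strings (one length byte, then that many data bytes), so returning them individually is a small loop. A self-contained sketch:

```rust
// Split TXT RDATA into its individual <character-string>s: each is one
// length byte followed by that many bytes of data.
fn txt_strings(mut rdata: &[u8]) -> Vec<&[u8]> {
    let mut out = Vec::new();
    while let Some((&len, rest)) = rdata.split_first() {
        let len = len as usize;
        if rest.len() < len { break; } // malformed: ignore trailing junk
        out.push(&rest[..len]);
        rdata = &rest[len..];
    }
    out
}

fn main() {
    let rdata = b"\x03foo\x03bar";
    assert_eq!(txt_strings(rdata), vec![&b"foo"[..], &b"bar"[..]]);
}
```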
Hello,
First, thanks for creating this!
I have two questions, I was hoping you could answer.
First, I was wondering if there was a reason you hadn't implemented nameservers yet. It looks trivially easy, since the parsing is the same as parsing Answers, or at least it seems that way to me. I did that and it worked. Is there some edge case I'm missing?
My other, more important question: do you think there is a good way in the Name struct to store the display version of the name (i.e. a vec holding www.google.com, instead of the label slice, which holds the offset value pointing to www.google.com)?
I was able to get it to work by adding a translated field to Name, which is a Vec<u8>, but to do so I had to remove the Copy trait from the derive for Name, and I'm not sure you're OK with that. I update the translated field in the scan function, where you validate the offset.
I could just convert the label into a string since you have it implemented, but I don't like the idea of having to parse the data again to convert it when we've already done the offset parsing during the validation.
Do you have thoughts on this?
I can also push up my branch, if you'd like to take a look.
Thanks!
Hello,
Could you please make a new release with the updated additional-record parsing? Thank you!
Trying to cache results, and using the RData abstraction to get the record type is not obvious. There needs to be a way to get the rrtype as a u16.
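A sketch of what such an accessor might look like; the enum is illustrative, not the crate's actual RData:

```rust
// Illustrative: map a parsed rdata variant back to its wire-format
// type code, so callers can key caches on a plain u16.
enum Rdata { A([u8; 4]), Aaaa([u8; 16]), Txt(Vec<u8>) }

impl Rdata {
    fn type_code(&self) -> u16 {
        match self {
            Rdata::A(_) => 1,
            Rdata::Aaaa(_) => 28,
            Rdata::Txt(_) => 16,
        }
    }
}

fn main() {
    assert_eq!(Rdata::A([127, 0, 0, 1]).type_code(), 1);
    assert_eq!(Rdata::Txt(b"hi".to_vec()).type_code(), 16);
}
```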