leonrado / innodb-tools
Automatically exported from code.google.com/p/innodb-tools
What steps will reproduce the problem?
1. wget http://innodb-tools.googlecode.com/files/innodb-recovery-0.3.tar.bz2
2. bunzip2 innodb-recovery-0.3.tar.bz2
3. Message: bunzip2: innodb-recovery-0.3.tar.bz2 is not a bzip2 file.
What is the expected output? What do you see instead?
- the file should decompress
What version of the product are you using? On what operating system?
0.3 on fedora linux
Please provide any additional information below.
The file is not a bzip2 file.
Original issue reported on code.google.com by [email protected]
on 28 Apr 2008 at 1:21
I needed some changes to handle DECIMAL in innodb-recovery-0.3.tar.gz.
In the output of create_defs.pl,
I think the (NEW)DECIMAL field definition does not carry enough information:
1. type: FT_CHAR should be FT_DECIMAL
2. decimal_precision: and decimal_digits: are needed
In addition, the space-padded output may be verbose in some cases.
So, I have changed print_data.c as follows:
    bin2decimal((char*) value, &dec, field->decimal_precision,
                field->decimal_digits);
    // decimal2string(&dec, string_buf, &len, field->decimal_precision,
    //                field->decimal_digits, ' ');
    decimal2string(&dec, string_buf, &len, 0, 0, 0);
    print_string(string_buf, len, field);
Best regards,
Yasufumi
Original issue reported on code.google.com by [email protected]
on 16 Dec 2008 at 4:58
What steps will reproduce the problem?
Run create_defs.pl on a database with tables that have any BLOB type.
What version of the product are you using? On what operating system?
Version innodb-recovery-0.3
The fix seemed simple enough. I added BLOB support to create_defs.pl. It
runs OK and generates a table_defs.h file with BLOBs defined. This
table_defs.h header compiles OK with tables_dict.c. After doing all this I
was able to run page_parser and constraints_parser on a database with many
BLOB fields... What I don't know is whether it actually works end-to-end. I
had various other problems with my corrupt ibdata1 file and was unable to
recover any useful data. So, while my patch may produce running code, I am
not sure whether my fix actually works. But it seems straightforward enough
for me to let you decide whether it should work or not.
I have enclosed a patch file and I have tried to include it inline here:
<pre>
# diff -c innodb-recovery-0.3/create_defs.pl innodb-recovery-0.3.noah/create_defs.pl
*** innodb-recovery-0.3/create_defs.pl 2008-04-01 18:58:03.000000000 -0700
--- innodb-recovery-0.3.noah/create_defs.pl 2008-08-01 02:37:07.000000000 -0700
***************
*** 363,367 ****
--- 363,383 ----
      return { type => 'FT_CHAR', fixed_len => $len_bytes };
  }
+ if ($type =~ /^TINYBLOB$/i) {
+     return { type => 'FT_BLOB', min_len => 0, max_len => 255 };
+ }
+
+ if ($type =~ /^BLOB$/i) {
+     return { type => 'FT_BLOB', min_len => 0, max_len => 65535 };
+ }
+
+ if ($type =~ /^MEDIUMBLOB$/i) {
+     return { type => 'FT_BLOB', min_len => 0, max_len => 16777215 };
+ }
+
+ if ($type =~ /^LONGBLOB$/i) {
+     return { type => 'FT_BLOB', min_len => 0, max_len => 4294967295 };
+ }
+
  die "Unsupported type: $type!\n";
  }
</pre>
Original issue reported on code.google.com by noah%[email protected]
on 4 Aug 2008 at 6:54
Attachments:
What steps will reproduce the problem?
1. No problem just a design improvement?
What is the expected output? What do you see instead?
no difference
What version of the product are you using? On what operating system?
openark-kit-170
Please provide any additional information below.
A trigger is used:
CREATE TRIGGER %s.%s AFTER UPDATE ON %s.%s
FOR EACH ROW
BEGIN
DELETE FROM %s.%s WHERE (%s) = (%s);
REPLACE INTO %s.%s (%s) VALUES (%s);
END
Imho there is no need for the DELETE
Regards
Erkan
Original issue reported on code.google.com by [email protected]
on 7 Mar 2011 at 4:29
--------
What steps will reproduce the problem?
--------
Configure, compile & run constraints_parser on an x86_64 platform and no
rows containing a field that is null will be recovered. Errors will be
similar to:
Invalid offset for field 12: 2147483701
--------
What is the expected output? What do you see instead?
--------
in constraints_parser.c:ibrec_init_offsets_old(), the following lines are
executed:
    offs |= REC_OFFS_SQL_NULL;    (lines 261, 279)
and
    offs |= REC_OFFS_EXTERNAL;    (line 284)
The problem is that REC_OFFS_SQL_NULL and REC_OFFS_EXTERNAL are set to
(1 << 31) even though ulint is in fact 64-bit; they should be (1 << 63)
instead.
As a workaround, replacing these 3 lines with
    offs |= REC_OFFS_SQL_NULL << 32;
and
    offs |= REC_OFFS_EXTERNAL << 32;
fixes it.
(As a wild guess this may be a problem with configure, rather than the
header files themselves)
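A width-independent way to express the same fix, as a sketch only (these are not the project's actual header definitions): derive the flag bits from sizeof(ulint), so the masks land in the top bits on both 32- and 64-bit builds.

```c
#include <limits.h>

typedef unsigned long ulint;  /* 64-bit on LP64 x86_64, 32-bit on i386 */

/* Put the flag bits in the top of ulint, whatever its width,
   instead of hard-coding (1 << 31). */
#define REC_OFFS_SQL_NULL ((ulint) 1 << (sizeof(ulint) * CHAR_BIT - 1))
#define REC_OFFS_EXTERNAL ((ulint) 1 << (sizeof(ulint) * CHAR_BIT - 2))
#define REC_OFFS_MASK     (~(REC_OFFS_SQL_NULL | REC_OFFS_EXTERNAL))

/* Example: mark an offset as SQL NULL, then recover the raw offset. */
static ulint mark_sql_null(ulint offs) {
    return offs | REC_OFFS_SQL_NULL;
}
static ulint raw_offset(ulint offs) {
    return offs & REC_OFFS_MASK;  /* payload bits survive the flags */
}
```

Masking with REC_OFFS_MASK then recovers the plain offset regardless of word size, which avoids the invalid-offset values reported above.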
--------
What version of the product are you using? On what operating system?
--------
uname -a
Linux <hostname> 2.6.18-8.1.8.el5 #1 SMP Mon Jun 25 17:06:07 EDT 2007
x86_64 x86_64 x86_64 GNU/Linux
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Original issue reported on code.google.com by [email protected]
on 22 Feb 2008 at 12:42
process_ibfile assumes that a partial read (one that returns less than the
requested amount of data) indicates an error. That is not an error, and it
is more likely when reading from NFS.
------
The code from process_ibfile is:
    // Read pages to the end of file
    while ((read_bytes = read(fn, page, UNIV_PAGE_SIZE)) == UNIV_PAGE_SIZE) {
------
The reads should be retried until the full page is read, or 0 (end of file)
or -1 (error) is returned. From the read(2) man page:
    On success, the number of bytes read is returned (zero indicates end of
    file), and the file position is advanced by this number. It is not an
    error if this number is smaller than the number of bytes requested; this
    may happen for example because fewer bytes are actually available right
    now (maybe because we were close to end-of-file, or because we are
    reading from a pipe, or from a terminal), or because read() was
    interrupted by a signal. On error, -1 is returned, and errno is set
    appropriately. In this case it is left unspecified whether the file
    position (if any) changes.
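A retrying read loop along those lines might look like this (read_full_page is a hypothetical helper, not a function from the tool):

```c
#include <errno.h>
#include <unistd.h>

#define UNIV_PAGE_SIZE (16 * 1024)

/* Returns UNIV_PAGE_SIZE on a full page, fewer bytes at end of file,
   or -1 on a genuine error. */
ssize_t read_full_page(int fd, unsigned char *page) {
    size_t total = 0;
    while (total < UNIV_PAGE_SIZE) {
        ssize_t n = read(fd, page + total, UNIV_PAGE_SIZE - total);
        if (n < 0) {
            if (errno == EINTR)
                continue;          /* interrupted by a signal: retry */
            return -1;             /* real error */
        }
        if (n == 0)
            break;                 /* end of file */
        total += (size_t) n;       /* partial read: keep reading */
    }
    return (ssize_t) total;
}
```

The caller can then treat only -1 as fatal, a short count as the final partial page, and UNIV_PAGE_SIZE as a complete page, which works equally well over NFS or pipes.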
Original issue reported on code.google.com by [email protected]
on 8 Oct 2009 at 4:03
What steps will reproduce the problem?
Run create_defs.pl for this table:
create table tz_bug (id bigint, wid int unsigned) engine=innodb;
This does not have the problem:
create table tz_ok (id bigint, wi int unsigned) engine=innodb;
The current regex used will make tz_bug.id use 'unsigned'
Output with the bug:
{ /* bigint(20) */
name: "id",
type: FT_UINT,
fixed_length: 8,
Expected output:
{ /* bigint(20) */
name: "id",
type: FT_INT,
fixed_length: 8,
Please provide any additional information below.
create_defs.pl has a bug that makes it use 'unsigned' for columns when
the column name is a substring of another column in the table and the
other column is unsigned.
This patch for IsFieldUnsigned fixes the problem.
250c250
< return ($row->[1] =~ /$field[^,]*unsigned/i);
---
> return ($row->[1] =~ /`$field`[^,]*unsigned/i);
Original issue reported on code.google.com by [email protected]
on 8 Oct 2009 at 3:57
constraints_parser is not friendly towards input from a raw device. It
reads from the file 16kb a time. InnoDB pages are 16kb aligned when read
from a file. But when read from a raw device they are only guaranteed to be
aligned to the file system block size (4kb on Linux for me).
Many rows will be missed when this is done because constraints_parser uses
a 16kb buffer to search for rows. When this is split over 2 InnoDB pages,
rows will be missed.
One way to fix this is to use a larger read size, so that rows will still
be lost, but less frequently.
Another way to fix this is to add a flag for constraints_parser to use only
pages with valid checksums, and then advance the file offset 4kb at a time
until a valid page is found.
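The second fix can be sketched as below; page_checksum_ok is a stand-in for the real InnoDB checksum verification, which the tool would have to implement against the page header and trailer fields:

```c
#include <string.h>

#define UNIV_PAGE_SIZE (16 * 1024)
#define FS_BLOCK_SIZE  (4 * 1024)

/* Stand-in validity test: a magic marker is enough to illustrate the
   scan; the real check would verify the InnoDB page checksums. */
static int page_checksum_ok(const unsigned char *page) {
    return memcmp(page, "PAGE", 4) == 0;
}

/* Scan a buffer in file-system-block (4 KiB) steps until an offset
   looks like the start of a valid page; returns -1 if none fits. */
long find_next_page(const unsigned char *buf, size_t len) {
    size_t off;
    for (off = 0; off + UNIV_PAGE_SIZE <= len; off += FS_BLOCK_SIZE)
        if (page_checksum_ok(buf + off))
            return (long) off;
    return -1;
}
```

With this, a raw device can be scanned at file-system-block granularity, so pages that are not 16kb-aligned in the device are still found.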
Original issue reported on code.google.com by [email protected]
on 8 Oct 2009 at 4:09
What steps will reproduce the problem?
1. create table with a SET type
2. run create_defs.pl on that table
What is the expected output?
- Unsupported type: set('not_null'...)!
What do you see instead?
- valid tables_defs.h
What version of the product are you using? On what operating system?
- innodb-recovery-0.3
Please provide any additional information below.
The patch to support 'set' is about 119 lines, and I prefer not to paste it
inline. I was able to recover a table with a 13-member set (about 3M
records), though it hasn't been tested extensively.
Original issue reported on code.google.com by [email protected]
on 16 Feb 2010 at 2:00
Attachments:
create_defs.pl does not support the varbinary datatype.
What definition do I need to use to recover varbinary(40) data?
..
I created a definition manually:
{ /* varbinary(40) */
name: "info_hash",
type: FT_BLOB,
min_length: 0,
max_length: 65535,
can_be_null: FALSE
},
but this crashes constraints_parser:
Initializing table definitions...
Processing table: a
- total fields: 39
- nullable fields: 2
- minimum header size: 46
- minimum rec size: 106
- maximum rec size: 525181
Read data from fn=3...
Page id: 0
Starting offset: 6. Checking 1 table definitions.
.....
Checking offset: 51: (a)
Checking offset: 52: (a) ORIGIN=OK DELETED=OK OFFSETS=OK Segmentation fault
Original issue reported on code.google.com by [email protected]
on 21 Mar 2009 at 10:31
print_data has this for unsigned ints, and the case for mediumint is
incorrect. This code clears the most significant bits. That should not be
done.
    switch (field->fixed_length) {
        case 1: return mach_read_from_1(value);
        case 2: return mach_read_from_2(value);
        case 3: return mach_read_from_3(value) & 0x3FFFFFUL;
The code for signed ints does not support negative values. It clears the
most significant bit, which is correct for positive values. For negative
values the bit should be flipped from 0 to 1. I think the code has other
problems with negative values from sign extension.
Something like this handles negative ints:
    ulint ur = mach_read_from_3(value);
    ulint b0 = ur & 0xff;
    ulint b1 = (ur >> 8) & 0xff;
    ulint b2 = ((ur >> 16) & 0xff) ^ 0x80;
    ulint r = b0 + (b1 << 8) + (b2 << 16);
    if (r > 0x7fffff) {
        return -((0xffffff - r) + 1);
    } else {
        return r;
    }
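The byte-level arithmetic above can be restated compactly. A sketch, assuming InnoDB's big-endian storage with a flipped sign bit (decode_mediumint is a hypothetical helper name, not part of the tool):

```c
/* Decode a 3-byte big-endian InnoDB MEDIUMINT, stored with the sign
   bit flipped, into a signed value. */
long decode_mediumint(const unsigned char *p) {
    unsigned long ur = ((unsigned long) p[0] << 16)
                     | ((unsigned long) p[1] << 8)
                     |  (unsigned long) p[2];
    ur ^= 0x800000UL;                            /* undo flipped sign bit */
    if (ur > 0x7FFFFFUL)                         /* negative value */
        return -(long) ((0xFFFFFFUL - ur) + 1);  /* 24-bit two's complement */
    return (long) ur;
}
```

For example, -5 is stored as 0x7FFFFB and 5 as 0x800005; flipping the top bit back and sign-extending 24 bits recovers the original values.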
Original issue reported on code.google.com by [email protected]
on 11 Oct 2009 at 10:56
What steps will reproduce the problem?
1. Execute the create_defs.pl command
2. The parameters are --db=test --table=movi
3. The problem: the "table_defs.h" file always have the same values
#ifndef table_defs_h
#define table_defs_h
// Table definitions
table_def_t table_definitions[] = {
};
#endif
I executed "create_defs.pl" many times and always got the same bad result.
What is the expected output? What do you see instead?
A "table_defs.h" file with the info of my table
What version of the product are you using? On what operating system?
innodb-recovery-0.3 on Debian Sarge 3.01
Please provide any additional information below.
My table is INNODB
I use Mysql-Server 4.0.24
Original issue reported on code.google.com by [email protected]
on 18 Feb 2010 at 8:10
What steps will reproduce the problem?
1. How do I import the pages? They are not SQL, and split_dump does
nothing. Is it possible to make an SQL dump from these files? They would
have to be perfect ;-)
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
Original issue reported on code.google.com by [email protected]
on 22 Jan 2008 at 9:15
open_ibfile assumes open() == 0 is an error. The correct check is open() < 0.
    int open_ibfile(char *fname) {
        struct stat fstat;
        int fn;
        ...
        fn = open(fname, O_RDONLY, 0);
        if (!fn) error("Can't open file!");
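A corrected version might look like this (open_ibfile_fixed is a hypothetical name for illustration): open(2) returns -1 on error, while 0 is a perfectly valid descriptor, for example when stdin has been closed first.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int open_ibfile_fixed(const char *fname) {
    int fn = open(fname, O_RDONLY);
    if (fn < 0) {                  /* correct check: < 0, not !fn */
        perror(fname);
        exit(1);
    }
    return fn;
}
```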
Original issue reported on code.google.com by [email protected]
on 8 Oct 2009 at 3:59
What steps will reproduce the problem?
1. make
2. ./page_parser
3. ./constraints_parser with debug mode on (-V)
What is the expected output? What do you see instead?
Table data is expected as the output. However, it prints a message that
says, "Page is in REDUNDANT format while we're looking for COMPACT - skipping"
What version of the product are you using? On what operating system?
innodb-recovery-0.3 on Fedora Core 10 x86 system.
Please provide any additional information below.
I'm trying to recover my database which was created long ago. The .frm
files are not so good, but I have a dump of the table structure and
triggers. So I created an empty database for "create_defs.pl" to pick the
table definitions. After generating the pages and then running
constraints_parser, it printed the above message. Earlier, I had ignored
the fact that newer MySQL versions create InnoDB tables with
ROW_FORMAT=COMPACT by default. So I dropped and imported the database
structure again, this time with ROW_FORMAT=REDUNDANT. But the same message
still comes up. I have no clue what the parser expects my table structure
to be. Please provide a solution.
Original issue reported on code.google.com by [email protected]
on 11 Jun 2009 at 10:18