Import multiple .sql dump files into a MySQL database from the shell

Date: 2021-12-12 02:46:41

I have a directory with a bunch of .sql files that are mysql dumps of each database on my server.

e.g.

database1-2011-01-15.sql
database2-2011-01-15.sql
...

There are quite a lot of them actually.

I need to create a shell script, or probably a one-liner, that will import each database.

I'm running on a Linux Debian machine.

I'm thinking there is some way to pipe the results of an ls into some find command or something.

Any help and education is much appreciated.

EDIT

So ultimately I want to automatically import one file at a time into the database.

E.g. if I did it manually on one, it would be:

mysql -u root -ppassword < database1-2011-01-15.sql

5 Answers

#1 (74 votes)

cat *.sql | mysql? Do you need them in any specific order?

If you have too many to handle this way, then try something like:

find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch

This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads in each file instead of having it read from stdin.
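To see what the awk stage feeds to mysql, you can run the first two stages on their own; a minimal sketch, using the example file names from the question:

$ find . -name '*.sql' | awk '{ print "source",$0 }'
source ./database1-2011-01-15.sql
source ./database2-2011-01-15.sql

mysql --batch then executes each of those source statements, opening the named dump file itself rather than reading it from stdin.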

#2 (17 votes)

One-liner to read in all .sql files and import them:

for SQL in *.sql; do DB=${SQL/\.sql/}; echo "importing $DB"; mysql "$DB" < "$SQL"; done

The only trick is the bash substring replacement to strip out the .sql to get the database name.
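A slight variation on the same idea, purely as a hedged sketch, using bash suffix removal instead of pattern substitution and passing the credentials from the question (replace them with your own):

for SQL in *.sql; do
  DB=${SQL%.sql}                        # strip the trailing .sql to get the database name
  echo "importing $DB"
  mysql -u root -ppassword "$DB" < "$SQL"
done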

#3 (7 votes)

There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:

for i in *.sql
do
  echo "file=$i"
  mysql -u admin_privileged_user --password=whatever your_database_here < "$i"
done

mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.
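If you take the gunzip-first route, a minimal sketch (the archive name here is just a placeholder):

gunzip -c huge-dump.sql.gz > huge-dump.sql    # decompress without removing the .gz

Then split the uncompressed file with mydumpsplitter and load the per-table files with the loop above.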

I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.

#4 (3 votes)

I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.

Here it is on GitHub

It's simple and straightforward; it allows you to specify mysql connection parameters, and will decompress gzip'ed sql files on-the-fly. It assumes you have a file per database, and the base of the filename is the desired database name.

So:

myload foo.sql bar.sql.gz

will create the databases "foo" and "bar" (if they don't already exist) and import the sql file into each.
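The actual myload script lives on GitHub; purely as an illustration of the idea (filename base becomes the database name, .gz dumps decompressed on the fly), a minimal hedged sketch might look like this:

#!/bin/sh
# Illustrative sketch only, not the actual myload script.
# Usage: ./load.sh foo.sql bar.sql.gz   (connection options omitted for brevity)
for f in "$@"; do
  db=$(basename "$f")
  db=${db%.gz}; db=${db%.sql}                   # filename base becomes the database name
  mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  case "$f" in
    *.gz) zcat "$f" | mysql "$db" ;;            # decompress gzip'ed dumps on the fly
    *)    mysql "$db" < "$f" ;;
  esac
done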

For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).

#5 (2 votes)

I don't remember the syntax of mysqldump, but it will be something like this:

 find . -name '*.sql'|xargs mysql ...
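Note that mysql won't take the .sql files as arguments, so a hedged working variant of the xargs idea (user, password, and database name are placeholders) redirects each file into its own mysql invocation:

# one mysql run per file; "$1" keeps file names with odd characters intact
find . -name '*.sql' -print0 | xargs -0 -n1 sh -c 'mysql -u root -ppassword your_database < "$1"' _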
