How can I create 1000 files to use for testing a script?

Date: 2021-01-04 01:23:32

I would like to create 1000+ text files containing some text, to test a script. How can I create that many text files in one go, using a shell script or Perl? Could anyone please help me?

9 solutions

#1


8  

for i in {0001..1000}
do
  echo "some text" > "file_${i}.txt"
done

Or, if you want to use Python (< 2.6):

for x in range(1000):
    open("file%03d.txt" % x,"w").write("some text")

#2


6  

#!/bin/bash
seq 1 1000 | split -l 1 -a 3 -d - file

The above creates 1000 files, each containing one number from 1 to 1000. The files will be named file000 ... file999.
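
A quick way to sanity-check that result (a sketch, not part of the original answer; it runs the same pipeline in a scratch directory):

```shell
# Run the split pipeline in a fresh scratch directory, then inspect the output.
cd "$(mktemp -d)"
seq 1 1000 | split -l 1 -a 3 -d - file
ls file* | wc -l     # prints 1000
cat file000          # the first file contains the number 1
```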

#3


4  

In Perl:

use strict;
use warnings;

for my $i (1..1000) {
   open(my $out,">",sprintf("file%04d",$i));
   print $out "some text\n";
   close $out;
}

Why the first 2 lines? Because they are good practice, so I use them even in one-shot programs like these.

Regards, Offer

#4


3  

For variety:

#!/usr/bin/perl

use strict; use warnings;
use File::Slurp;

write_file $_, "$_\n" for map sprintf('file%04d.txt', $_), 1 .. 1000;

#5


2  

#!/bin/bash

for suf in $(seq -w 1000)
do
        cat << EOF > myfile.$suf
        this is my text file
        there are many like it
        but this one is mine.
EOF
done
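
As an aside (a variation, not part of the original answer): `cat << EOF` preserves the leading indentation of the here-document body, so each file above contains the indented lines verbatim. Storing the text once and writing it with printf keeps the content flush-left and avoids a here-document on every iteration:

```shell
# Variation: store the text once, then write it with printf in the loop.
cd "$(mktemp -d)"
text='this is my text file
there are many like it
but this one is mine.'

# seq -w pads the numbers to equal width: 0001 .. 1000
for suf in $(seq -w 1000); do
    printf '%s\n' "$text" > "myfile.$suf"
done

ls myfile.* | wc -l    # prints 1000
```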

#6


1  

I don't know how to do it in shell or Perl, but in Python it would be:

#!/usr/bin/python

for i in xrange(1000):
    with open('file%0.3d' %i,'w') as fd:
        fd.write('some text')

I think what it does is pretty straightforward.

#7


1  

You can use Bash alone, with no external commands, and still zero-pad the numbers so the filenames sort properly (if needed):

read -r -d '' text << 'EOF'
Some text for
my files
EOF

for i in {1..1000}
do
    printf -v filename "file%04d" "$i"
    echo "$text" > "$filename"
done

Bash 4 can do it like this:

for filename in file{0001..1000}; do echo "$text" > "$filename"; done

Both versions produce filenames like "file0001" and "file1000".

#8


1  

Just take any big file that has at least 1000 bytes (for 1000 files with content). There are plenty of them on your computer. Then do, for example:

split -n 1000 /usr/bin/firefox

This is almost instantaneous.

Or a bigger file:

split -n 10000 /usr/bin/cat

This took only 0.253 seconds to create 10000 files.

For 100k files:

split -n 100000 /usr/bin/gcc

Only 1.974 seconds for 100k files with about 5 bytes each.

If you only want files with text, look in your /etc directory. To create one million text files with almost-random text:

split -n 1000000 /etc/gconf/schemas/gnome-terminal.schemas

20.203 seconds for 1M files with about 2 bytes each. If you divide this big file into only 10k parts, it takes just 0.220 seconds, and each file gets 256 bytes of text.
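
If no suitably large file is at hand, the source can be synthesized first (a sketch along the same lines; like the commands above, it assumes a GNU split that supports -n):

```shell
# Sketch: generate a throwaway text source, then split it into 1000 pieces.
# -a 3 forces three-character suffixes so split cannot run out of names.
cd "$(mktemp -d)"
seq 1 10000 > source.txt
split -n 1000 -a 3 source.txt part_
ls part_* | wc -l    # prints 1000
```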

#9


0  

Here is a short command-line Perl program.

perl -E'say $_ $_ for grep {open $_, ">f$_"} 1..1000'
