Increasing screen capture speed when using Java and awt.Robot

Time: 2023-02-04 15:42:54

Edit: If anyone has any other recommendations for increasing the performance of screen capture, please feel free to share them, as that might fully address my problem!

Hello Fellow Developers,

I'm working on some basic screen capture software for myself. As of right now I've got some proof-of-concept/tinkering code that uses java.awt.Robot to capture the screen as a BufferedImage. I repeat this capture for a specified amount of time and afterwards dump all of the pictures to disk. From my tests I'm getting about 17 frames per second.

Trial #1

Length: 15 seconds; Images Captured: 255

Trial #2

Length: 15 seconds; Images Captured: 229

Obviously this isn't nearly good enough for a real screen capture application, especially since these captures were just me selecting some text in my IDE and nothing graphically intensive.

I have two classes right now, a Main class and a "Monitor" class. The Monitor class contains the method for capturing the screen. My Main class has a time-based loop that calls the Monitor class and stores the BufferedImage it returns into an ArrayList of BufferedImages. If I modify my Main class to spawn several threads that each execute that loop and also record the system time at which each image was captured, could I increase performance? My idea is to use a shared data structure that automatically sorts the frames by capture time as I insert them, instead of a single loop inserting successive images into an ArrayList.

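For reference, the shared, time-sorted structure described above could be sketched with a ConcurrentSkipListMap keyed by capture time. This is only an illustration of the data-structure idea, not a claim that multiple threads will actually raise throughput (the OS-level grab may be the bottleneck); the class and method names are made up for this sketch.

```java
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.util.concurrent.ConcurrentSkipListMap;

public class TimestampedCapture {
    // Frames stay sorted by capture time no matter which thread inserts them.
    private final ConcurrentSkipListMap<Long, BufferedImage> frames =
            new ConcurrentSkipListMap<>();

    // Each capture thread would run this loop with its own Robot instance.
    public void captureLoop(long recordTimeMillis) throws AWTException {
        Robot robot = new Robot();
        Rectangle screenRect =
                new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        long start = System.currentTimeMillis();
        while (System.currentTimeMillis() - start <= recordTimeMillis) {
            BufferedImage img = robot.createScreenCapture(screenRect);
            frames.put(System.currentTimeMillis(), img); // keyed by capture time
        }
    }
}
```

Iterating over `frames.values()` afterwards yields the images in capture order without any explicit sort step.
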
Code:

Monitor

public class Monitor {

/**
 * Captures the entire primary screen.
 * @return the captured frame, or null if the Robot could not be created
 */
public BufferedImage captureScreen() {
    Rectangle screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
    BufferedImage capture = null;

    try {
        capture = new Robot().createScreenCapture(screenRect);
    } catch (AWTException e) {
        e.printStackTrace();
    }

    return capture;
}
}

Main

public class Main {


public static void main(String[] args) throws InterruptedException {
    String outputLocation = "C:\\Users\\ewillis\\Pictures\\screenstreamer\\";
    String namingScheme = "image";
    String mediaFormat = "jpeg";
    DiscreteOutput output = DiscreteOutputFactory.createOutputObject(outputLocation, namingScheme, mediaFormat);

    ArrayList<BufferedImage> images = new ArrayList<BufferedImage>();
    Monitor m1 = new Monitor();

    long startTimeMillis = System.currentTimeMillis();
    long recordTimeMillis = 15000;

    while( (System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis ) {
        images.add( m1.captureScreen() );
    }

    output.saveImages(images);

}
}

2 Answers

#1


3  

Re-using the screen rectangle and robot class instances will save you a little overhead. The real bottleneck is storing all your BufferedImage's into an array list.

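A version of the question's Monitor class that creates the Robot and Rectangle once, in the constructor, rather than on every call, might look like this (a sketch based on the original class):

```java
import java.awt.AWTException;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;

public class Monitor {
    // Created once and re-used for every capture, instead of per call.
    private final Robot robot;
    private final Rectangle screenRect;

    public Monitor() throws AWTException {
        this.robot = new Robot();
        this.screenRect =
                new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
    }

    public BufferedImage captureScreen() {
        return robot.createScreenCapture(screenRect);
    }
}
```

The AWTException now surfaces from the constructor, so the caller decides once how to handle a missing display instead of swallowing it on every frame.
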
I would first benchmark how fast the robot.createScreenCapture(screenRect) call is without any IO (no saving or storing of the buffered image). This will give you the ideal throughput of the Robot class.

long frameCount = 0;
BufferedImage image;
while ((System.currentTimeMillis() - startTimeMillis) <= recordTimeMillis) {
    image = m1.captureScreen();
    if (image != null) {
        frameCount++;
    }
    Thread.yield();
}

If it turns out that captureScreen can reach the FPS you want, there is no need to multi-thread Robot instances.

Rather than keeping an array list of buffered images, I'd keep an array list of the Futures returned by AsynchronousFileChannel.write.

  • Capture loop
    • Get BufferedImage
    • Convert BufferedImage to byte array containing JPEG data
    • Create an async channel to the output file
    • Start a write and add the immediate return value (the Future) to your ArrayList
  • Wait loop
    • Go through your ArrayList of Futures and make sure they all finished
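The steps above might be sketched as follows. The class name, per-frame file naming, and the single-frame `write` method are illustrative, not part of the original answer:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import javax.imageio.ImageIO;

public class AsyncFrameWriter {
    private final List<Future<Integer>> pending = new ArrayList<>();
    private final List<AsynchronousFileChannel> channels = new ArrayList<>();
    private int frameIndex = 0;

    // Capture-loop step: encode the frame to JPEG and start an async write.
    public void write(BufferedImage image, String outputDir) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ImageIO.write(image, "jpeg", baos);        // BufferedImage -> JPEG bytes
        Path path = Paths.get(outputDir, "image" + frameIndex++ + ".jpeg");
        AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                path, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        channels.add(channel);
        // write() returns immediately; keep the Future to check on later.
        pending.add(channel.write(ByteBuffer.wrap(baos.toByteArray()), 0));
    }

    // Wait-loop step: block until every write has finished, then close channels.
    public void awaitAll() throws Exception {
        for (Future<Integer> f : pending) {
            f.get();
        }
        for (AsynchronousFileChannel c : channels) {
            c.close();
        }
    }
}
```

The capture loop calls `write(...)` per frame and only the JPEG encoding happens inline; the disk IO overlaps with the next capture.
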

#2


2  

I guess that intensive memory usage is an issue here. In your tests you are capturing about 250 screenshots. Depending on the screen resolution, that is:

1280x800 : 250 * 1280*800  * 3/1024/1024 ==  732 MB data
1920x1080: 250 * 1920*1080 * 3/1024/1024 == 1483 MB data

Try capturing without keeping all those images in memory.

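One way to do that is to write each frame to disk as soon as it is captured, so only a single BufferedImage is live at a time. In this sketch the `Supplier` stands in for the question's `Monitor::captureScreen`, and the file-naming scheme is made up:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.function.Supplier;
import javax.imageio.ImageIO;

public class StreamingCapture {
    // Writes each frame immediately instead of accumulating an ArrayList,
    // keeping memory usage flat regardless of recording length.
    public static int captureToDisk(Supplier<BufferedImage> capture,
                                    String outputDir,
                                    long recordTimeMillis) throws IOException {
        long start = System.currentTimeMillis();
        int frame = 0;
        while (System.currentTimeMillis() - start <= recordTimeMillis) {
            BufferedImage image = capture.get();
            ImageIO.write(image, "jpeg",
                    new File(outputDir, "image" + frame++ + ".jpeg"));
        }
        return frame;
    }
}
```

The trade-off is that the JPEG encode and disk write now sit inside the capture loop, which is where the async-write approach from the first answer helps.
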
As @Obicere said, it is a good idea to keep the Robot instance alive.
