Converting a C# method to a C++ method

Date: 2022-09-01 16:49:15

I'm exploring various options for mapping common C# code constructs to C++ CUDA code for running on a GPU. The structure of the system is as follows (arrows represent method calls):

C# program -> C# GPU lib -> C++ CUDA implementation lib

A method in the GPU library could look something like this:

public static class GpuCollectionExtensions
{
    public static void Map<T>(this ICollection<T> c, Func<T, T> f)
    {
        // Call 'f' on each element of 'c' (eventually on the GPU)
    }
}

This is an extension method on ICollection<> types that runs a function on each element. However, what I would like it to do is call the C++ library and have it run the function on the GPU. This would require the function to be, somehow, translated into C++ code. Is this possible?

To elaborate: if the user of my library executes a method (in C#) with some arbitrary code in it, I would like to translate this code into the C++ equivalent so that I can run it on CUDA. I have the feeling that there is no easy way to do this, but I would like to know whether there is any way to do it, or to achieve some of the same effect.

One thing I was wondering about is capturing the function to be translated in an Expression and using that to map it to a C++ equivalent. Does anyone have any experience with this?
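
For illustration, here is a minimal sketch of what I have in mind, assuming the library accepted an Expression<Func<T,T>> instead of a plain delegate. The operator handling and type mapping are toy placeholders, not a real translator:

using System;
using System.Linq.Expressions;

// Minimal sketch: walk the expression tree of a captured lambda and emit
// roughly equivalent C++/CUDA source. Only two operators and a naive type
// map are handled; a real translator would need to cover far more.
public static class CppTranslator
{
    public static string Translate<T>(Expression<Func<T, T>> f)
    {
        string t = typeof(T) == typeof(int) ? "int" : "float"; // naive type map
        string arg = f.Parameters[0].Name;
        return $"__device__ {t} f({t} {arg}) {{ return {Emit(f.Body)}; }}";
    }

    private static string Emit(Expression e)
    {
        switch (e)
        {
            case ParameterExpression p:
                return p.Name;
            case ConstantExpression c:
                return c.Value.ToString();
            case BinaryExpression b when b.NodeType == ExpressionType.Multiply:
                return $"({Emit(b.Left)} * {Emit(b.Right)})";
            case BinaryExpression b when b.NodeType == ExpressionType.Add:
                return $"({Emit(b.Left)} + {Emit(b.Right)})";
            default:
                throw new NotSupportedException(e.NodeType.ToString());
        }
    }
}

// CppTranslator.Translate<int>(x => x * 2 + 1) yields:
//   __device__ int f(int x) { return ((x * 2) + 1); }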

6 Answers

#1


7  

There's CUDA.Net if you want a reference for how C# can be run on a GPU.

#2


2  

To be honest, I'm not sure I fully understand what you are getting at. However, you may be interested in this project, which converts .Net applications/libraries into straight C++ without requiring any .Net framework: http://www.codeplex.com/crossnet

#3


1  

I would recommend the following process to accelerate some of your computation using CUDA from a C# program:

  • First, create an unmanaged C++ library that you P/Invoke for the functions you want to accelerate. This will restrict you more or less to the data types you can easily work with in CUDA. (A minimal sketch of this boundary follows the list.)

  • Integrate your unmanaged library with your C# application. If you're doing things correctly, you should already notice some kind of speed-up. If not, you should probably give up.

  • Replace the C++ functions inside your library (without changing its interface) to perform the computations on the GPU with CUDA kernels.
  • 替换库中的C ++函数(不更改其接口)以使用CUDA内核在GPU上执行计算。
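
A minimal sketch of the first step. The library name "gpukernels" and the exported function "VectorScale" are made up for illustration:

using System;
using System.Runtime.InteropServices;

// Sketch of the P/Invoke boundary. The native side would export something like:
//   extern "C" void VectorScale(float* data, int length, float scale);
// first as a plain CPU loop, later backed by a CUDA kernel launch.
public static class NativeGpu
{
    [DllImport("gpukernels", CallingConvention = CallingConvention.Cdecl)]
    public static extern void VectorScale(float[] data, int length, float scale);
}

public class Demo
{
    public static void Main()
    {
        var data = new float[] { 1f, 2f, 3f };
        // float[] is blittable, so the runtime pins it and passes a raw pointer.
        NativeGpu.VectorScale(data, data.Length, 2f);
        Console.WriteLine(string.Join(", ", data)); // 2, 4, 6 once the native side exists
    }
}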

#4


0  

Interesting question. I'm no expert at C#, but I think an ICollection is a container of objects. If each element of c were, say, a pixel, you'd have to do a lot of marshalling to convert that into a buffer of bytes or floats that CUDA could use. I suspect that would slow everything down enough to negate the advantage of doing anything on the GPU.
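
To make the cost concrete, here is what that flattening step might look like, with a hypothetical Pixel type; it is a full pass over the managed objects per transfer, in each direction:

using System;
using System.Collections.Generic;

// Illustration of the marshalling overhead: managed objects must be copied
// into a flat, blittable buffer before CUDA can touch them, and copied back
// afterwards. The Pixel type here is hypothetical.
public class Pixel { public float R, G, B; }

public static class PixelMarshalling
{
    public static float[] Flatten(ICollection<Pixel> pixels)
    {
        var buffer = new float[pixels.Count * 3];
        int i = 0;
        foreach (var p in pixels) // one full pass per transfer, each way
        {
            buffer[i++] = p.R;
            buffer[i++] = p.G;
            buffer[i++] = p.B;
        }
        return buffer; // this copy is pure overhead compared with native code
    }
}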

#5


0  

What you could do is write your own IQueryable LINQ provider, as is done for LINQ to SQL to translate LINQ queries to SQL.
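
A bare-bones sketch of the shape such a provider takes; the GPU translation itself (the interesting part) is left as a stub, and all names here are made up:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Skeleton of a custom LINQ provider: Execute() receives the whole query as
// an expression tree, which is where LINQ to SQL builds SQL and where a GPU
// backend would compile and launch a kernel instead.
public class GpuQueryProvider : IQueryProvider
{
    public IQueryable CreateQuery(Expression e) => throw new NotImplementedException();
    public IQueryable<T> CreateQuery<T>(Expression e) => new GpuQueryable<T>(this, e);
    public object Execute(Expression e) => Execute<object>(e);
    public T Execute<T>(Expression e) =>
        throw new NotImplementedException("Translate 'e' to CUDA and run it here.");
}

public class GpuQueryable<T> : IQueryable<T>
{
    public GpuQueryable(IQueryProvider provider, Expression e = null)
    {
        Provider = provider;
        Expression = e ?? Expression.Constant(this);
    }

    public Type ElementType => typeof(T);
    public Expression Expression { get; }
    public IQueryProvider Provider { get; }

    // Enumeration is what finally forces Execute() to run the query.
    public IEnumerator<T> GetEnumerator() =>
        Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();
    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}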

However, one problem that I see with this approach is that LINQ queries are usually evaluated lazily; if you need to benefit from pipelining, this is probably not a viable solution.

It might also be worth investigating how to implement Google’s MapReduce API for C# and CUDA and then use an approach similar to PyCuda to ship the logic to the GPU. In that context, it might also be useful to take a look at the already existing MapReduce implementation in CUDA.

#6


0  

That's a very interesting question and I have no idea how to do this.

However, the Brahma library seems to do something very similar. You can define functions using LINQ which are then compiled to GLSL shaders to run efficiently on a GPU. Have a look at their code and in particular the Game of Life sample.
