Peer-to-peer replication in SQL Server 2005/08

Time: 2022-12-06 10:06:30

Has anyone had any experience in setting up peer to peer replication using SQL Server 2005 or 2008?

Specifically, I'm interested in whether other options/alternatives were considered and why P2P replication was ultimately chosen.

If you have used P2P replication:

  • Did you encounter any issues during synchronization and was it easy to monitor?

  • How easy was/is it to do conflict resolution?

  • Did you have to make schema changes (e.g., replacing identity columns)?

    Alternatively, if you considered P2P replication and went with a different option, why did you rule it out?

    1 Solution

    #1


    (Disclaimer: I'm a developer, not a DBA)

    We have SQL Server 2005 merge replication set up to replicate between two active/active geographically-separated nodes for resilience in a legacy system.

    I don't know whether it's easy to monitor; that was outside my remit.

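    For what it's worth, the distributor does expose monitoring metadata you can query; a minimal sketch (assuming access to the distribution database - verify the proc name against your version):

        -- Run at the distributor; summarises subscriptions for merge
        -- publications (@publication_type = 2), including status and latency.
        USE distribution;
        EXEC sp_replmonitorhelpsubscription @publication_type = 2;

    There is also the Replication Monitor GUI in Management Studio, but I can't speak to how well it works day-to-day.
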
    It creates triggers on every table to implement the publish/subscribe mechanism, each of which calls its own stored procedure.

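    If you want to see what it generated, the objects follow an MSmerge_ naming convention (at least on our install - verify on yours); something like this lists them per table:

        -- Replication-generated triggers and procedures; for triggers, also
        -- show the user table each one hangs off.
        SELECT o.name,
               o.type_desc,
               OBJECT_NAME(t.parent_id) AS parent_table
        FROM sys.objects AS o
        LEFT JOIN sys.triggers AS t
            ON t.object_id = o.object_id
        WHERE o.name LIKE 'MSmerge[_]%'
        ORDER BY o.type_desc, parent_table, o.name;
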
    In our case, it was set up to use identity ranges 1-1bn on node 0 and 1bn-2bn on node 1 to avoid identity collisions (rather than, for example, using a composite key of NodeId + EntityId for each table, or changing keys to GUIDs).

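    Roughly, that manual partitioning looks like the following (dbo.Orders is a made-up illustration, not our schema; merge replication can also manage identity ranges automatically, but that isn't what was chosen here):

        -- Node 0 owns the range 1 .. 1bn:
        CREATE TABLE dbo.Orders
        (
            OrderId  bigint IDENTITY(1, 1) NOT NULL PRIMARY KEY,
            PlacedAt datetime NOT NULL
        );

        -- On node 1 the same table is seeded above the 1bn boundary, so the
        -- two nodes can never generate the same key:
        --   OrderId bigint IDENTITY(1000000001, 1) NOT NULL PRIMARY KEY
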
    I think the replication latency is around 15s (between London and New York over dedicated bandwidth).

    It is a huge pain to work with:

    • It took a highly paid contractor a year to set it up (granted, part of this was due to the legacy nature of the DB design)

    • We lack anyone in-house with the expertise to support it (the in-house DBA we had took ~6 months to learn it, and has since moved on)

    • Schema updates are now painful (see the sketch after this list). From what I understand:
      • Certain updates must be performed on only one node; replication then takes care of figuring out what to do on the other node(s)

      • Certain updates must be performed on both nodes

      • Data updates must be performed on one node only (I think)

      • All updates now take significantly longer to perform - from the split-second it takes to run a DDL change-script to ~30 minutes

    • I don't know for sure, but I think the bandwidth requirement for replication is very high (in the MBit/s range)

    • It introduces many "noise" objects (3 sprocs per table, 3 triggers per table) into the DB, making it inconvenient to find the item you want to work on in Object Explorer.

    • We will never set up a third node for this system, based largely on the perceived difficulty and added pain it would introduce at deployment-time.

    • We also now lack a staging environment that mirrors production, because it's too painful to set up.

    • Anecdotal: The DBA doing the setup would frequently curse the fact that it was an "MS v1" he was being forced to work with.

    • Dimly remembered: The DBA needed to raise several priority support tickets to get help from MS directly.
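
    To give a flavour of the one-node-only schema-update case: if the publication replicates schema changes (the @replicate_ddl = 1 option on sp_addmergepublication, if memory serves), an additive change like this is run against the publishing node only and replication propagates it:

        -- Run on one node only; replication applies the DDL elsewhere.
        -- (dbo.Orders and Notes are illustrative names.)
        ALTER TABLE dbo.Orders
            ADD Notes nvarchar(255) NULL;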

    Granted - some of the pain involved is due to our specific environment and not having in-house talent to support this setup. Your mileage may vary.
