
# drbd.conf example
#
# The parameters you usually need to change are hostname, device, disk,
# meta-disk, address and port in the "on <hostname> { }" sections.
# It is also advisable to know about protocol and timeout.
# You probably want to set the rate in the syncer section.
# Common mistake: the rate is given in units of bytes, not bits.
#
# Examples of details to configure via drbd.conf:
# Increasing timeout (and possibly ping-int) in the net { } section may
# resolve problems that show up as "connection lost/connection established"
# messages (or change the setup to reduce network latency: ensure a
# full-duplex connection, and check the average round-trip times if the
# network is saturated).

skip {
  As you can see, it is possible to comment out lines or whole blocks of
  text with a 'skip [optional nonsense] { skipped text }' section.

  The basic format of option assignment is:
    <option name><linear whitespace><value>;
  It should become obvious from the examples below:
    <option name> := valid options in the respective scope
    <value>  := <num>|<string>|<choice>|... depending on the set of
                allowed values for the respective option.
    <num>    := [0-9]+, sometimes with an optional suffix of K,M,G
    <string> := (<name>|\"([^\"\\\n]*|\\.)*\")+
    <name>   := [/_.A-Za-z0-9-]+
}

# At most ONE global section is allowed.
# It must precede all resource sections.
global {
  # By default we load the module with a minor-count of 32. In case you
  # have more devices in your configuration, the module gets loaded with
  # a minor-count that ensures you have 10 minors spare. In case 10
  # spare minors are too little for you, you can set minor-count
  # explicitly here. (The default is only a little above what is
  # actually needed.)
  # minor-count 64;

  # The user dialog counts and displays the seconds it waits. You may
  # disable this if you have the console of your server connected to a
  # serial terminal server with limited logging capacity.
  # The dialog prints the count each 'dialog-refresh' seconds; set it
  # to 0 to disable redrawing completely. [ default = 1 ]
  dialog-refresh 5; # 5 seconds

  # Enables/disables drbdadm's sanity check of IP addresses.
  # disable-ip-verification;

  # Participate in DRBD's online usage counter at http://usage.drbd.org.
  # Possible options are ask, yes and no; if unsure, use ask.
  usage-count yes;
}

# The common section can contain every section of a resource, except the
# host sections (those starting with "on"). All resources inherit the
# settings of the common section; settings in a resource take precedence
# over those in common.
common {
  syncer { rate 10M; }
}

resource r0 {
  # Transfer protocol to use.
  # C: write IO is reported as completed, if we know it has
  #    reached _both_ local and remote DISK.
  #    * for critical transactional data.
  # B: write IO is reported as completed, if it has reached
  #    local DISK and remote buffer cache.
  #    * for most cases.
  # A: write IO is reported as completed, if it has reached
  #    local DISK and local tcp send buffer. (see also sndbuf-size)
  #    * for high latency networks
  #
  # Uhm, benchmarks have shown that C is actually better than B.
  # This note shall disappear when we are convinced that B is the
  # right choice for the majority of cases.
  # Until then, always use C unless you have a reason not to.
  protocol C;

  handlers {
    # What should be done in case the node is primary, degraded
    # (= no connection) and has inconsistent data.
    pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

    # The node is currently primary, but lost the after-split-brain
    # auto recovery procedure. As a consequence it should go away.
    pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

    # In case you have set the on-io-error option to "call-local-io-error",
    # this script will get executed in case of a local IO error. It is
    # expected that this script will cause an immediate failover in the
    # cluster.
    local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

    # Commands to run in case we need to downgrade the peer's disk
    # state to "Outdated". Should be implemented by the superior
    # communication possibilities of our cluster manager.
    # The provided script uses ssh, and is for demonstration/development
    # purposes.
    # fence-peer "/usr/lib/drbd/outdate-peer.sh on amd 192.168.22.11 192.168.23.11 on alf 192.168.22.12 192.168.23.12";
    #
    # Update: Now there is a solution that relies on heartbeat's
    # communication layers. You should really use this.
    fence-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
    # For Pacemaker you might use:

    # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";

    # The node is currently primary, but should become sync target
    # after the negotiating phase. Alert someone about this incident.
    # pri-lost "/usr/lib/drbd/notify-pri-lost.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

    # Notify someone of a split brain immediately, regardless of
    # auto recovery policies.
    # initial-split-brain "/usr/lib/drbd/notify-split-brain.sh root";

    # Notify someone, or maybe do something else, if DRBD split
    # brained, was not automatically recovered, and the resource is
    # now sitting disconnected.
    # split-brain "/usr/lib/drbd/notify-split-brain.sh root";

    # Notify someone in case an online verify run found the backing
    # devices out of sync.
    # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";

    # These two handlers can be used to snapshot sync-target devices
    # for the time of the resync.
    # The provided script has these options:
    #  -p | --percent <reserve space in percent of the original volume. Default: 10%>
    #  -a | --additional <snapshot space in KiB. Default: 10 MiB>
    #  -n | --disconnect-on-error
    #       By default the script tells DRBD to do the resync no
    #       matter whether taking the snapshot works or not.
    #       If you prefer to drop the connection in case taking the
    #       snapshot fails, use the --disconnect-on-error option.
    #  -v | --verbose
    #  -- <additional lvcreate options>
    # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
    # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
  }

  startup {
    # Wait for connection timeout.
    # The init script blocks the boot process until the resources
    # are connected. This is so that when the cluster manager starts
    # later, it does not see a resource with internal split-brain.
    # In case you want to limit the wait time, do it here.
    # Default is 0, which means unlimited. Unit is seconds.
    #
    # wfc-timeout 0;

    # Wait for connection timeout if this node was a degraded cluster.
    # In case a degraded cluster (= cluster with only one node left)
    # is rebooted, this timeout value is used.
    #
    degr-wfc-timeout 120; # 2 minutes.

    # Wait for connection timeout if the peer node is already outdated.
    # (Do not set this to 0, since that means unlimited)
    #
    outdated-wfc-timeout 2; # 2 seconds.

    # In case there was a split brain situation the devices will
    # drop their network configuration instead of connecting. Since
    # this means that the network is working, the cluster manager
    # should be able to communicate as well. Therefore the default
    # of DRBD's init script is to terminate in this case. To make
    # it continue waiting in this case, set this option.
    #
    # wait-after-sb;

    # In case you are using DRBD for GFS/OCFS2 you want the startup
    # script to promote it to primary. Node names are also possible
    # instead of "both".
    # become-primary-on both;
  }

  disk {
    # If the lower level device reports an io-error you have the choice of
    #  "pass_on"  -> Report the io-error to the upper layers.
    #                Primary   -> report it to the mounted file system.
    #                Secondary -> ignore it.
    #  "call-local-io-error"
    #             -> Call the script configured by the name "local-io-error".
    #  "detach"   -> The node drops its backing storage device, and
    #                continues in diskless mode.
    #
    on-io-error detach;

    # Controls the fencing policy, default is "dont-care". Before you
    # set any policy you need to make sure that you have a working
    # fence-peer handler. Possible values are:
    #  "dont-care"     -> Never call the fence-peer handler. [ DEFAULT ]
    #  "resource-only" -> Call the fence-peer handler if we are primary
    #                     and lose the connection to the secondary, as
    #                     well as when an unconnected secondary wants to
    #                     become primary.
    #  "resource-and-stonith"
    #                  -> Calls the fence-peer handler and freezes local
    #                     IO immediately after loss of connection. This
    #                     is necessary if your heartbeat can STONITH the
    #                     other node.
    # fencing resource-only;

    # In case you only want to use a fraction of the available space
    # you might use the "size" option here.
    #
    # size 10G;

    # In case you are sure that your storage subsystem has battery
    # backed RAM and you know from measurements that it really honors
    # flush instructions by flushing data out from its non-volatile
    # write cache to disk, you have double security. You might then
    # reduce this to single security by disabling disk flushes with
    # these options. It might improve performance in this case.
    # ONLY USE THESE OPTIONS IF YOU KNOW WHAT YOU ARE DOING.
    # no-disk-flushes;
    # no-md-flushes;

    # In some special circumstances the device mapper stack manages to
    # pass BIOs to DRBD that violate the constraints that are set forth
    # by DRBD's merge_bvec() function and which have more than one bvec.
    # A known example is:
    #   phys-disk -> DRBD -> LVM -> Xen -> misaligned partition (63) -> DomU FS
    # Then you might see "bio would need to, but cannot, be split:" in
    # the Dom0's kernel log.
    # The best workaround is to properly align the partition within the
    # VM (e.g. start it at sector 1024). (Costs 480 KiByte of storage.)
    # Unfortunately the default of most Linux partitioning tools is
    # to start the first partition at an odd number (63). Therefore
    # most distributions' install helpers for virtual linux machines
    # will end up with misaligned partitions. The second best
    # workaround is to limit DRBD's max bvecs per BIO (= max-bio-bvecs)
    # to 1. (Costs performance.)
    # max-bio-bvecs 1;
  }

  net {
    # This is the size of the tcp socket send buffer. Increase it
    # _carefully_ if you want to use protocol A over a high latency
    # network with reasonable write throughput. Defaults to 2*65535;
    # you might try even 1M, but if your kernel or network driver
    # chokes on that, you have been warned.
    # sndbuf-size 512k;

    # timeout       60;   #  6 seconds (unit = 0.1 seconds)
    # connect-int   10;   # 10 seconds (unit = 1 second)
    # ping-int      10;   # 10 seconds (unit = 1 second)
    # ping-timeout   5;   # 500 ms     (unit = 0.1 seconds)

    # Maximal number of requests (4K) to be allocated by DRBD.
    # The minimum is hardcoded to 32 (= 128 kByte).
    # For high performance installations it might help if you
    # increase that number. These buffers are used to hold data
    # blocks while they are written to disk.
    max-buffers 2048;

    # When the number of outstanding requests on a standby (secondary)
    # node exceeds bdev-threshold, we start to kick the backing device
    # to start its request processing. This is an advanced tuning
    # parameter to get more performance out of capable storage
    # controllers. Some controllers like to be kicked often, others
    # deliver better performance when they are kicked less frequently.
    # Set it to the value of max-buffers to get the least possible
    # number of run_task_queue_disk() / q->unplug_fn(q) calls.
    unplug-watermark 128;

    # The highest number of data blocks between two write barriers.
    # If you set this < 10 you might decrease your performance.
    # max-epoch-size 2048;

    # If some block send times out this many times, the peer is
    # considered dead, even if it still answers ping requests.
    # ko-count 4;

    # If you want to use OCFS2/openGFS on top of DRBD enable this
    # option, and only enable it if you are going to use one of
    # these filesystems. Do not enable it for ext2, ext3, reiserFS,
    # XFS, JFS etc...
    # allow-two-primaries;

    # This enables peer authentication. Without this everybody on the
    # network could connect to one of your DRBD nodes with a program
    # that emulates DRBD's protocol and could suck off all your data.
    # Specify one of the kernel's digest algorithms, e.g.:
    # md5, sha1, sha256, sha512, wp256, wp384, wp512, michael_mic ...
    # and a shared secret.
    # Authentication is only done once after the TCP connection is
    # established; there are no disadvantages from using
    # authentication, therefore I suggest to enable it in any case.
    cram-hmac-alg "sha1";

    # shared-secret "FooFunFactory";

    # In case the nodes of your cluster see each other again, after
    # a split brain situation in which both nodes were primary at the
    # same time, you have two diverged versions of your data.
    #
    # In case both nodes are secondary you can control DRBD's auto
    # recovery strategy by the "after-sb-0pri" options. The default
    # is to disconnect.
    #  "disconnect" ... No automatic resynchronisation, simply disconnect.
    #  "discard-younger-primary"
    #        Auto sync from the node that was primary before the
    #        split brain situation happened.
    #  "discard-older-primary"
    #        Auto sync from the node that became primary as second
    #        during the split brain situation.
    #  "discard-least-changes"
    #        Auto sync from the node that touched more blocks during
    #        the split brain situation.
    #  "discard-node-NODENAME"
    #        Auto sync _to_ the named node.
    after-sb-0pri disconnect;

    # If one of the nodes is already primary, then the auto-recovery
    # strategy is controlled by the "after-sb-1pri" options.
    #  "disconnect" ... always disconnect
    #  "consensus"  ... discard the version of the secondary if the
    #        outcome of the "after-sb-0pri" algorithm would also
    #        destroy the current secondary's data. Otherwise disconnect.
    #  "violently-as0p" Always take the decision of the "after-sb-0pri"
    #        algorithm, even if that causes an erratic change of the
    #        primary's view of the data.
    #        This is only useful if you use a one-node FS (i.e. not
    #        OCFS2 or GFS) with the allow-two-primaries flag, _AND_
    #        you really know what you are doing.
    #        This is DANGEROUS and MAY CRASH YOUR MACHINE if you have
    #        a FS mounted on the primary node.
    #  "discard-secondary"
    #        discard the version of the secondary.
    #  "call-pri-lost-after-sb" Always honor the outcome of the
    #        "after-sb-0pri" algorithm. In case it decides that the
    #        current secondary has the right data, it panics the
    #        current primary.
    #  "suspend-primary" ???
    after-sb-1pri disconnect;

    # In case both nodes are primary you control DRBD's strategy by
    # the "after-sb-2pri" option.
    #  "disconnect" ... Go to StandAlone mode on both sides.
    #  "violently-as0p" Always take the decision of the "after-sb-0pri".
    #  "call-pri-lost-after-sb" ... Honor the outcome of the
    #        "after-sb-0pri" algorithm and panic the other node.
    after-sb-2pri disconnect;

    # To solve the cases when the outcome of the resync decisions is
    # incompatible with the current role assignment in the cluster.
    #  "disconnect" ... No automatic resynchronisation, simply disconnect.
    #  "violently"  ... Sync to the primary node is allowed, violating
    #        the assumption that data on a block device is stable for
    #        one of the nodes. DANGEROUS, DO NOT USE.
    #  "call-pri-lost" Call the "pri-lost" helper program on one of
    #        the machines. This program is expected to reboot the

    #        machine, i.e. make it secondary.
    rr-conflict disconnect;

    # DRBD-0.7's behaviour is equivalent to
    #   after-sb-0pri discard-younger-primary;
    #   after-sb-1pri consensus;
    #   after-sb-2pri disconnect;

    # DRBD can ensure the data integrity of the user's data on the
    # network by comparing hash values. Normally this is ensured by
    # the 16 bit checksums in the headers of TCP/IP packets.
    # Unfortunately it turned out that GBit NICs with various
    # offloading engines might produce valid checksums for corrupted
    # data.
    # Use this option during your pre-production tests; usually you
    # want to turn it off for production to reduce CPU overhead.
    # Note: data blocks that get changed while the transfer goes on
    # cause false positives. Known block device users which do so
    # are the swap code and ReiserFS.
    data-integrity-alg "md5";

    # DRBD usually uses the TCP socket option TCP_CORK to hint to
    # the network stack when it can expect more data, and when it
    # should flush out what it has in its send queue. It turned out
    # that there is at least one network stack that performs worse
    # when one uses this hinting method. Therefore we introduced
    # this option, which disables the setting and clearing of the
    # TCP_CORK socket option by DRBD.
    no-tcp-cork;
  }

  syncer {
    # Limit the bandwidth used by the resynchronisation process.
    # Default unit is kByte/sec; optional suffixes K,M,G are allowed.
    #
    # Even though this is a network setting, the units are based
    # on _byte_ (octet for our french friends), not bit.
    # We are storage guys.
    #
    # Note that on 100Mbit ethernet, you cannot expect more than
    # 12.5 MByte total transfer rate.
    # Consider using GigaBit Ethernet.
    #
    rate 10M;

    # Normally all devices are resynchronised in parallel.
    # To achieve better resynchronisation performance you should
    # resync DRBD resources which have their backing storage on one
    # physical disk sequentially. To express this, use the "after"
    # keyword.
    after "r2";

    # Configures the size of the active set. Each extent is 4M;
    # 257 extents ~> 1GB active set size. In case your syncer runs
    # @ 10MB/sec, all resync after a primary's crash will last
    # 1GB / (10MB/sec) ~ 102 seconds ~ one minute and 42 seconds.
    # BTW, the hash algorithm works best if the number of al-extents
    # is prime. (To test the worst case performance use a power of 2.)
    al-extents 257;

    # Sets the CPU affinity mask of DRBD's threads. Might be of
    # interest for advanced performance tuning.
    # cpu-mask 15;

    # As an alternative to the fixed resync rate, DRBD has a resync
    # speed controller. Its purpose is to resync as fast as possible,
    # without filling up queues along the data path.

    # The controller gets enabled by setting c-plan-ahead to a value
    # greater than 0 (default is 0, i.e. it is disabled).
    # c-plan-ahead   10;  # How long the controller should plan ahead (unit = 0.1 seconds)
    # c-fill-target   0;  # Aimed fill level; 0 to use c-delay-target (unit = sectors)
    # c-delay-target  6;  # Aimed delay by the drbd-proxy's fill level (unit = 0.1 seconds)
    # c-max-rate   100M;  # Upper bound for the controller
    # c-min-rate     4M;  # Below this rate, application IO will not cause extra throttling of resync

    # If the local disk and the connection to the peer fail
    # concurrently, DRBD fails IO requests by default. Alternatively
    # you can set this to "suspend-io".
    on-no-data-accessible io-error;
  }

  on amd {
    device    /dev/drbd0;
    disk      /dev/hde5;
    address   192.168.22.11:7788;
    flexible-meta-disk internal;

    # meta-disk is either 'internal' or '/dev/ice/name [idx]'
    #
    # You can use a single block device to store meta-data of
    # multiple DRBD's.
    # E.g. use meta-disk /dev/hde6[0]; and meta-disk /dev/hde6[1];
    # for two different resources. In this case the meta-disk would
    # need to be at least 256 MB in size.
    #
    # 'internal' means that the last 128 MB of the lower device are
    # used to store the meta-data.
    # You must not give an index with 'internal'.
  }

  on alf {
    device    /dev/drbd0;
    disk      /dev/hdc5;
    address   192.168.22.12:7788;
    meta-disk internal;
  }
}

#
# yes, you may also quote the resource name.
# but don't include whitespace, unless you mean it :)
#
resource "r1" {
  protocol C;
  startup {
    wfc-timeout         0;  ## Infinite!
    degr-wfc-timeout  120;  ## 2 minutes.
  }
  disk {
    on-io-error detach;
  }
  net {
    # timeout           60;
    # connect-int       10;
    # ping-int          10;
    # max-buffers     2048;
    # max-epoch-size  2048;

  }
  syncer {
  }

  # It is valid to move device, disk and meta-disk to the
  # resource level.
  device    /dev/drbd1;
  disk      /dev/hde6;
  meta-disk /dev/somewhere [7];

  on amd {
    # Here is an example of ipv6.
    # If you want to use ipv4 in ipv6, i.e. something like
    # [::ffff:192.168.22.11], you have to set disable-ip-verification
    # in the global section.
    address ipv6 [fd0c:39f4:f135:305:230:48ff:fe63:5c9a]:7789;
  }

  on alf {
    address ipv6 [fd0c:39f4:f135:305:230:48ff:fe63:5ebe]:7789;
  }
}

resource r2 {
  protocol C;

  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk    { on-io-error detach; }
  net     { timeout 60; connect-int 10; ping-int 10;
            max-buffers 2048; max-epoch-size 2048; }
  syncer  { rate 4M; } # sync when r0 and r1 are finished syncing.
  on amd {
    address 192.168.22.11:7790;
    disk /dev/hde7; device /dev/drbd2; meta-disk "internal";
  }
  on alf {
    device "/dev/drbd2"; disk "/dev/hdc7"; meta-disk "internal";
    address 192.168.22.12:7790;
  }
}

resource r3 {
  protocol C;
  device /dev/drbd3;

  on amd {
    disk      /dev/hde8;
    address   192.168.22.11:7791;
    meta-disk internal;
  }
  on alf {
    disk      /dev/hdc8;
    address   192.168.22.12:7791;
    meta-disk /some/where[8];
  }
}

resource r4 {
  protocol C;
  device minor 4;

  on amd {
    disk      /dev/hde9;
    address   192.168.22.11:7792;

    meta-disk internal;
  }
  on alf {
    disk      /dev/hdc9;
    address   192.168.22.12:7792;
    meta-disk /some/where[9];
  }
}

resource lower-alice-bob {
  protocol C;
  on alice {
    device    /dev/drbd4;
    disk      /dev/hde9;
    address   192.168.23.11:7791;
    meta-disk internal;
  }
  on bob {
    device    /dev/drbd4;
    disk      /dev/hdc9;
    address   192.168.23.12:7791;
    meta-disk /some/where[8];
  }
}

resource lower-charly-daisy {
  protocol C;
  on charly {
    device    /dev/drbd4;
    disk      /dev/hde9;
    address   192.168.23.13:7791;
    meta-disk internal;
  }
  on daisy {
    device    /dev/drbd4;
    disk      /dev/hdc9;
    address   192.168.23.14:7791;
    meta-disk /some/where[8];
  }
}

resource upper {
  protocol A;
  stacked-on-top-of lower-alice-bob {
    device  /dev/drbd10;
    address 127.0.0.1:1230;
    proxy on alice bob {
      inside  127.0.0.1:1234;
      outside 192.168.23.21:7791;
    }
  }
  stacked-on-top-of lower-charly-daisy {
    device  /dev/drbd10;
    address 127.0.0.1:1230;
    proxy on charly daisy {
      inside  127.0.0.1:1234;
      outside 192.168.23.22:7791;
    }
  }
}
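# ---------------------------------------------------------------------
# Editor's recap (kept commented out; the hostnames, devices and
# addresses below are made-up placeholders, NOT part of the example
# set above): a minimal two-node resource only needs the handful of
# parameters named at the top of this file -- device, disk,
# address:port and meta-disk inside each on <hostname> { } section.
#
# resource minimal-example {
#   protocol C;
#   on nodeA {
#     device    /dev/drbd9;
#     disk      /dev/sdb1;
#     address   10.0.0.1:7799;
#     meta-disk internal;
#   }
#   on nodeB {
#     device    /dev/drbd9;
#     disk      /dev/sdb1;
#     address   10.0.0.2:7799;
#     meta-disk internal;
#   }
# }
# ---------------------------------------------------------------------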
