OVH Community, your new community space.

PROXMOX + LSI MegaRAID 9271-4i + 3x SSD ... Houston, we've had a problem here !!!


Zento
31/08/2016, 09:08
Hi.

We also used to back up from Proxmox to the NFS share they provide, but we have had constant problems with it too: timeouts, disk access that takes forever... We opened several support tickets that never led anywhere, because supposedly our server is in GRA and the backups are in RBX so that behaviour is to be expected, or there is no guaranteed bandwidth. In short, they brush you off. In the end we opted to back up to local disk and send the archives to Amazon S3.
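For anyone wanting to copy that "back up to disk, then ship to Amazon S3" approach, here is a minimal dry-run sketch. Zento does not say which tool they use, so the AWS CLI (`aws s3 cp`), the dump directory and the bucket name are all assumptions:

```shell
# Dry-run sketch: print the upload command for each local vzdump archive.
# upload_cmds DIR BUCKET -> one "aws s3 cp ..." line per archive found.
# The default Proxmox dump path and the bucket are placeholders, not values
# from the thread.
upload_cmds() {
  dir=$1; bucket=$2
  for f in "$dir"/vzdump-*; do
    [ -e "$f" ] || continue                         # no backups -> nothing to do
    echo aws s3 cp "$f" "$bucket/$(basename "$f")"  # pipe to sh to really upload
  done
}

upload_cmds /var/lib/vz/dump s3://example-backup-bucket
```

Piping the output to `sh` performs the actual uploads, which of course requires the AWS CLI installed and credentials configured.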

I think we are ending up mixing two different topics in this thread...

ferminator
28/08/2016, 16:12
Hi Ruben, if you want, contact me and we can discuss it.

Regards,
Fermin

rubendob
25/08/2016, 09:31
Hi!

While searching for information about this problem I came across this thread. I have been suffering from the same issue with an LSI MegaRAID 9271-4i controller, a RAID 5 of 3 disks, and EXT4 + SWAP partitions.

OVH apparently found faulty disks and replaced them, and they also swapped the controller; it made no difference.

I run a custom CentOS 6.8 with OpenVZ + R1Soft and I am trying to find out whether anything there is related.

Regarding what the other poster mentions: in my case the NFS backups are disabled because, as I said, I use the R1Soft solution.

But the error is fairly random... I have no leads; I will post my specifications later in case they help anyone with the same problem.

Regards

ferminator
02/01/2015, 18:23
Hello again!

Well, it seems that in the end it works much better with these parameters for the OVH NFS Backup, on PROXMOX v3.3.

Does anyone have other experiences with mount parameters for the OVH NFS Backup and Proxmox?

Regards,
Fermin.



- Parameters for the OVH NFS BACKUP.

# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0

nfs: nfs1
disable
path /mnt/pve/nfs1
server ftpback-rbx6-8.ovh.net
export /export/ftpbackup/ns333214.ip-37-187-159.eu
options vers=4,rw,udp,rsize=32768,wsize=32768,hard,intr,noatime,timeo=60,retrans=5,async,nolock,noacl
content backup
nodes leopard
maxfiles 2


Mounted, it ends up like this:

# cat /proc/mounts | grep ftpback

ftpback-rbxX-X.ovh.net:/export/ftpbackup/nsXXXXXX.ip-XX-XX-XX.eu/ /mnt/pve/nfs1 nfs4 rw,noatime,vers=4,rsize=32768,wsize=32768,namlen=255,hard,proto=udp,port=0,timeo=60,retrans=5,sec=sys,clientaddr=XX.XX.XX.XX,minorversion=0,local_lock=none,addr=XX.XX.XX.XX 0 0


- And ... limiting VZDump to 80 Mbps.

# cat /etc/vzdump.conf

bwlimit: 1920
size: 2048
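One note on units: as far as I know, `bwlimit` in vzdump.conf is expressed in KiB/s, not Mbps, so it is worth converting the two figures; a quick shell sketch (1920 KiB/s works out to about 15 Mbit/s, while 80 Mbit/s would be on the order of 10000 KiB/s):

```shell
# Convert between the vzdump bwlimit unit (KiB/s) and megabits per second.
# Integer arithmetic only; results are truncated.
kibs_to_mbps() { echo $(( $1 * 8 * 1024 / 1000000 )); }  # KiB/s -> Mbit/s
mbps_to_kibs() { echo $(( $1 * 1000000 / 8 / 1024 )); }  # Mbit/s -> KiB/s

kibs_to_mbps 1920   # the value in the config above -> 15
mbps_to_kibs 80     # the figure mentioned in the post -> 9765
```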

ferminator
31/12/2014, 14:01
Hi txavi, ... thanks for replying!

I think I have finally 'caught' the problem ... it looks like the whole issue comes from the 1 TB NFS Backup that I mount from OVH to back up the VPSes.

I have never used VMware ESXi, because it does not allow OpenVZ, i.e. container para-virtualization (Linux only); containers are very efficient and fast, perhaps the best option for virtualizing Linux VPSes. I believe VMware only supports full virtualization, whereas
PROXMOX supports both OpenVZ (para-virtualization) and KVM (full virtualization).
With a good 'box' on Proxmox I have comfortably run more than 50 Linux VPSes; it is a joy, they run superbly!

See ...
http://www.proxmox.com/proxmox-ve/comparison
http://es.wikipedia.org/wiki/OpenVZ
http://openvz.org/Main_Page

Back to the topic of the thread ... as you know, OVH now supplies a 500 GB NFS BACKUP with every EG/SP/MG server; we upgraded ours to 1 TB to store the backups there. Well, it seems that writes to the NFS share hit TIMEOUTs, which holds up all the JOURNALING on sda (RAID5), where the NFS is mounted, and in the end everything seems to blow up.

I think the OVH NFS may be SATURATED, because sometimes it writes at 100 Mbps ... and other times at 5 Mbps or less. And as you know, this LSI MegaRAID 9271-4i controller runs like lightning!, so perhaps the simultaneous accesses
to the NFS from several servers choke it.

It looks like the rsize and wsize NFS mount parameters are critical; I will ask the OVH folks whether they know anything about this.

Regards,
Fermin
admin @ open-matrix.es

---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------

# cat /etc/pve/storage.cfg

dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0

nfs: nfs1
disable
path /mnt/pve/nfs1
server ftpback-rbx6-8.ovh.net
export /export/ftpbackup/nsXXXXXXX.ip-AA-BB-CC.eu
options vers=3 << ------------- add the extra mount options here
content images,backup
nodes leopard
maxfiles 2


# cat /proc/mounts | grep ftpback

ftpback-rbxZ-Z.ovh.net:/export/ftpbackup/nsXXXXXX.ip-AA-BB-CC.eu /mnt/pve/nfs1 nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=AA.BB.CC.DD,mountvers=3,mountport=36744,mountproto=udp,local_lock=none,addr=AA.BB.CC.DD 0 0
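The rsize/wsize values that Fermin calls critical can be pulled out of a `/proc/mounts` line programmatically; a small sketch using a sanitized NFSv3 line (placeholder hostname) as sample input:

```shell
# Extract rsize and wsize from a /proc/mounts-style NFS entry.
# Field 4 of each /proc/mounts line holds the comma-separated mount options.
MOUNT_LINE='ftpback.example.net:/export /mnt/pve/nfs1 nfs rw,relatime,vers=3,rsize=1048576,wsize=1048576,hard,proto=tcp 0 0'

echo "$MOUNT_LINE" | awk '{
  n = split($4, opts, ",")                         # split the options field
  for (i = 1; i <= n; i++)
    if (opts[i] ~ /^(rsize|wsize)=/) print opts[i] # keep only r/wsize
}'
```

Against the real system, replace the sample variable with `grep ftpback /proc/mounts`.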


------------------syslog -----------------------------
Dec 26 02:25:08 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
Dec 26 02:25:15 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
Dec 26 02:25:45 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
Dec 26 02:26:25 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
Dec 26 02:26:38 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
Dec 26 02:26:45 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
---------------------------------------------------------
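When chasing this kind of problem it helps to quantify how often the NFS store times out. A sketch that buckets the pvestatd warnings per minute, run here against a sample (the real syslog path varies by distribution, so the sample file is an assumption):

```shell
# Count pvestatd storage timeouts per minute from syslog-style input.
# $1/$2 are month/day, $3 is HH:MM:SS; keep only HH:MM for the bucket.
count_timeouts() {
  grep "failed: got timeout" "$1" | awk '{print $1, $2, substr($3,1,5)}' | sort | uniq -c
}

# Sample input mimicking the lines above.
cat > /tmp/sample-syslog <<'EOF'
Dec 26 02:25:08 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
Dec 26 02:25:15 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
Dec 26 02:26:25 leopard pvestatd[79734]: WARNING: command 'df -P -B 1 /mnt/pve/nfs1' failed: got timeout
EOF
count_timeouts /tmp/sample-syslog
```

A burst of timeouts clustered around the nightly vzdump window would point at the backup job rather than at the controller.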

txavi
31/12/2014, 12:25
Hi ferminator,

I have that controller with 3 SSDs in RAID 5, but I use ESXi... even so, I have had my share of fights with that controller... Under a Linux VM I never managed to configure MegaRAID Storage Manager (MSM) in SERVER<<-->>VM mode; I only got it working from a Windows 2008 VM. Still, I would recommend trying to install MSM so you can dig into the possible controller errors, at least from a graphical interface, and see whether it is a battery problem, a cache problem, etc...

http://www.lsi.com/products/raid-con....aspx#tab/tab4

Let's see if someone with PROXMOX knowledge can help you :-)

Regards and happy new year!

ferminator
30/12/2014, 21:40
Hello everyone!

First of all, given the time of year, I wish you all a Happy and pro$perous new year!!!
;-)

Well, let me describe the big problem I have, in case anyone has run into something similar:

At 'open-matrix.es' we run several rather 'BEEFY' servers, which we use to provide hosting, VPSes and
dedicated business applications on powerful configurations with Xeon + LSI 9271-4i + RAID5 SSD ... etc.

The problem happens ONLY on this server; the others work fine!, ... and they have almost identical configurations.
The problematic one also ran without a hitch for 100+ days, until it started 'blowing up'
and the KERNEL began spewing errors ... let me explain.


::MACHINE::
-------------
Enterprise SP-64 server: Xeon E5-1620v2 + 64 GB RAM + LSI MegaRAID 9271-4i + RAID5 (3x 480 GB SSD) + 1 TB NFS Backup


::SOFTWARE::
--------------
Proxmox VE 3.3 ( Kernel 2.6.32-34-pve SMP Dec 19 07:42:04 CET 2014 )
Currently running a total of about 36 VPSes ( OpenVZ - Ubuntu/CentOS/Debian - Virtualmin/Plesk )


::PROBLEM::
------------
Out of the blue, the RAID5 on the MegaRAID LSI 9271-4i controller switches to READ-ONLY mode, rejecting any write
to the disks, and everything locks up. This happens within roughly 48 hours. Since it ends up READ ONLY it cannot write to the
logs, so I have had to catch the errors on the fly over SSH and KVM-IP, which makes the problem quite hard to trace.

Note:
All the hardware tests in OVH RESCUE-Pro mode passed successfully, and SMARTCTL on the SSD
disks and FSCK on the filesystem are both OK.
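One caveat on the SMART check: behind a MegaRAID controller, a plain `smartctl -a /dev/sda` queries the virtual drive, not the member SSDs. smartctl's `-d megaraid,N` device type addresses each physical slot instead; this snippet just prints the per-slot commands to run (slot numbers 0-2 for the three disks are an assumption):

```shell
# Print the smartctl invocations for each physical disk behind the controller.
# -d megaraid,N selects physical slot N through the LSI RAID controller.
for slot in 0 1 2; do
  echo "smartctl -a -d megaraid,$slot /dev/sda"
done
```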


What do you think??

Kind regards.



-----------------------------------------------------------------------------------------------------------------------------------
----- SYSLOG - CRASH -----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------------------

Dec 29 23:28:59 leopard kernel: EXT4-fs warning (device dm-0): ext4_end_bio: I/O error writing to inode 5926096
Dec 29 23:28:59 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:28:59 leopard kernel: end_request: I/O error, dev sda, sector 0
Dec 29 23:28:59 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: [sda] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: [sda] CDB: Write(10): 2a 00 47 d6 14 70 00 00 10 00
Dec 29 23:29:00 leopard kernel: end_request: I/O error, dev sda, sector 1205212272
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: [sda] Unhandled error code
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: [sda] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: [sda] CDB: Write(10): 2a 00 46 e8 78 d8 00 00 08 00
Dec 29 23:29:00 leopard kernel: JBD2: Detected IO errors while flushing file data on dm-0-8
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:29:00 leopard kernel: Aborting journal on device dm-0-8.
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:29:00 leopard kernel: EXT4-fs error (device dm-0): ext4_journal_start_sb: Detected aborted journal
Dec 29 23:29:00 leopard kernel: JBD2: I/O error detected when updating journal superblock for dm-0-8.
Dec 29 23:29:00 leopard kernel: EXT4-fs (dm-0): Remounting filesystem read-only
Dec 29 23:29:00 leopard kernel: EXT4-fs error (device dm-0) in ext4_new_inode: Journal has aborted
Dec 29 23:29:00 leopard kernel: EXT4-fs error (device dm-0) in ext4_reserve_inode_write: Journal has aborted
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:29:00 leopard kernel: EXT4-fs error (device dm-0) in ext4_orphan_add: Journal has aborted
Dec 29 23:29:00 leopard kernel: EXT4-fs error (device dm-0) in ext4_new_inode: Journal has aborted

Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:29:00 leopard kernel: EXT4-fs error (device dm-0) in ext4_reserve_inode_write: Journal has aborted
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device

Dec 29 23:29:00 leopard kernel: end_request: I/O error, dev sda, sector 29771464
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:29:00 leopard kernel: end_request: I/O error, dev sda, sector 143513600

Dec 29 23:29:00 leopard kernel: EXT4-fs warning (device dm-0): ext4_end_bio: I/O error writing to inode 5920323
Dec 29 23:29:00 leopard kernel: end_request: I/O error, dev sda, sector 698531976

Dec 29 23:29:00 leopard kernel: EXT4-fs warning (device dm-0): ext4_end_bio: I/O error writing to inode 18756785
Dec 29 23:29:00 leopard kernel: sd 0:2:0:0: rejecting I/O to offline device
Dec 29 23:29:00 leopard kernel: end_request: I/O error, dev sda, sector 628082184
Dec 29 23:29:00 leopard kernel: EXT4-fs error (device dm-0) in ext4_orphan_add: Journal has aborted

Message from syslogd@leopard at Dec 29 23:29:02 ...
kernel:journal commit I/O error
Message from syslogd@leopard at Dec 29 23:29:02 ...
kernel:journal commit I/O error
Message from syslogd@leopard at Dec 29 23:29:02 ...
kernel:journal commit I/O error
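A quick way to see whether the I/O errors cluster around particular regions of the array is to pull the sector numbers out of the kernel log; a sketch run against a sample of the lines above:

```shell
# List the failing sectors reported by end_request lines, sorted numerically.
cat > /tmp/sample-kern.log <<'EOF'
Dec 29 23:28:59 leopard kernel: end_request: I/O error, dev sda, sector 0
Dec 29 23:29:00 leopard kernel: end_request: I/O error, dev sda, sector 1205212272
Dec 29 23:29:00 leopard kernel: end_request: I/O error, dev sda, sector 29771464
EOF

# Keep only the sector number from each matching line.
sed -n 's|.*end_request: I/O error, dev sda, sector \([0-9]*\).*|\1|p' \
    /tmp/sample-kern.log | sort -n
```

Here the sectors are scattered across the whole device, which fits the controller taking sda offline as a whole rather than a localized bad area.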

Dec 30 17:46:52 leopard kernel: INFO: task jbd2/sda1-8:462 blocked for more than 120 seconds.

Dec 30 17:46:52 leopard kernel: Tainted: G --------------- T 2.6.32-31-pve #1
Dec 30 17:46:52 leopard kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 30 17:46:52 leopard kernel: jbd2/sda1-8 D ffff881073c2e980 0 462 2 0 0x00000000
Dec 30 17:46:52 leopard kernel: ffff881073f0fc10 0000000000000046 ffff881073f0fba0 ffffffff81086871
Dec 30 17:46:52 leopard kernel: ffff881073f0fb80 ffffffff81014d79 ffff881073f0fbc0 ffffffff810aaa44
Dec 30 17:46:52 leopard kernel: ffff88107373aeb8 ffff88107952c760 ffff881073f0ffd8 ffff881073f0ffd8

Dec 30 17:46:52 leopard kernel: Call Trace:
Dec 30 17:46:52 leopard kernel: [] ? del_timer+0x81/0xe0
Dec 30 17:46:52 leopard kernel: [] ? read_tsc+0x9/0x20
Dec 30 17:46:52 leopard kernel: [] ? ktime_get_ts+0xb4/0xf0
Dec 30 17:46:52 leopard kernel: [] ? sync_buffer+0x0/0x50
Dec 30 17:46:52 leopard kernel: [] io_schedule+0x73/0xc0
Dec 30 17:46:52 leopard kernel: [] sync_buffer+0x40/0x50
Dec 30 17:46:52 leopard kernel: [] __wait_on_bit+0x60/0x90
Dec 30 17:46:52 leopard kernel: [] ? sync_buffer+0x0/0x50
Dec 30 17:46:52 leopard kernel: [] out_of_line_wait_on_bit+0x7c/0x90
Dec 30 17:46:52 leopard kernel: [] ? wake_bit_function+0x0/0x50
Dec 30 17:46:52 leopard kernel: [] __wait_on_buffer+0x26/0x30
Dec 30 17:46:52 leopard kernel: [] jbd2_journal_commit_transaction+0xa27/0x1460 [jbd2]
Dec 30 17:46:52 leopard kernel: [] ? __switch_to+0xc2/0x2f0
Dec 30 17:46:52 leopard kernel: [] ? lock_timer_base.isra.52+0x38/0x70
Dec 30 17:46:52 leopard kernel: [] kjournald2+0xb8/0x200 [jbd2]
Dec 30 17:46:52 leopard kernel: [] ? autoremove_wake_function+0x0/0x40
Dec 30 17:46:52 leopard kernel: [] ? kjournald2+0x0/0x200 [jbd2]
Dec 30 17:46:52 leopard kernel: [] kthread+0x88/0x90
Dec 30 17:46:52 leopard kernel: [] ? __switch_to+0xc2/0x2f0
Dec 30 17:46:52 leopard kernel: [] child_rip+0xa/0x20
Dec 30 17:46:52 leopard kernel: [] ? kthread+0x0/0x90
Dec 30 17:46:52 leopard kernel: [] ? child_rip+0x0/0x20

Dec 30 17:46:52 leopard kernel: INFO: task flush-8:0:1187 blocked for more than 120 seconds.

Dec 30 17:46:52 leopard kernel: Tainted: G --------------- T 2.6.32-31-pve #1
Dec 30 17:46:52 leopard kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 30 17:46:52 leopard kernel: flush-8:0 D ffff881077674500 0 1187 2 0 0x00000000
Dec 30 17:46:52 leopard kernel: ffff881075993570 0000000000000046 0000000000000000 00000000000800f1
Dec 30 17:46:52 leopard kernel: ffff881075993510 ffffffff811e08c6 ffff8810759934f0 0000000000000000
Dec 30 17:46:52 leopard kernel: 000000000001bc40 0000000000001000 ffff881075993fd8 ffff881075993fd8

Dec 30 17:46:52 leopard kernel: Call Trace:
Dec 30 17:46:52 leopard kernel: [] ? __find_get_block_slow+0xb6/0x140
Dec 30 17:46:52 leopard kernel: [] ? prepare_to_wait+0x4e/0x80
Dec 30 17:46:52 leopard kernel: [] do_get_write_access+0x26d/0x480 [jbd2]
Dec 30 17:46:52 leopard kernel: [] ? wake_bit_function+0x0/0x50
Dec 30 17:46:52 leopard kernel: [] jbd2_journal_get_write_access+0x30/0x50 [jbd2]
Dec 30 17:46:52 leopard kernel: [] __ext4_journal_get_write_access+0x36/0x80 [ext4]
Dec 30 17:46:52 leopard kernel: [] ext4_reserve_inode_write+0x73/0xa0 [ext4]
Dec 30 17:46:52 leopard kernel: [] ext4_mark_inode_dirty+0x49/0x1a0 [ext4]
Dec 30 17:46:52 leopard kernel: [] ext4_ext_dirty.isra.18+0x2d/0x40 [ext4]
Dec 30 17:46:52 leopard kernel: [] ext4_ext_insert_extent+0x5f8/0x10e0 [ext4]
Dec 30 17:46:52 leopard kernel: [] ? ext4_ext_find_extent+0x295/0x310 [ext4]
Dec 30 17:46:52 leopard kernel: [] ext4_ext_get_blocks+0x325/0x13a0 [ext4]
Dec 30 17:46:52 leopard kernel: [] ? submit_bio+0x83/0x190
Dec 30 17:46:52 leopard kernel: [] ? __bio_clone+0x26/0x70
Dec 30 17:46:52 leopard kernel: [] ext4_get_blocks+0x1c5/0x220 [ext4]
Dec 30 17:46:52 leopard kernel: [] mpage_da_map_and_submit+0xa7/0x3a0 [ext4]
Dec 30 17:46:52 leopard kernel: [] ? jbd2_journal_start+0xc0/0x100 [jbd2]
Dec 30 17:46:52 leopard kernel: [] ext4_da_writepages+0x2fb/0x630 [ext4]
Dec 30 17:46:52 leopard kernel: [] ? __writepage+0x0/0x40
Dec 30 17:46:52 leopard kernel: [] do_writepages+0x1f/0x50
Dec 30 17:46:52 leopard kernel: [] __writeback_single_inode+0xa6/0x2a0
Dec 30 17:46:52 leopard kernel: [] writeback_single_inode+0x3a/0xc0
Dec 30 17:46:52 leopard kernel: [] ? iput+0x30/0x70
Dec 30 17:46:52 leopard kernel: [] writeback_sb_inodes+0xf6/0x1e0
Dec 30 17:46:52 leopard kernel: [] writeback_inodes_wb+0xff/0x170
Dec 30 17:46:52 leopard kernel: [] wb_writeback+0x2a3/0x3f0
Dec 30 17:46:52 leopard kernel: [] ? thread_return+0xbc/0x870
Dec 30 17:46:52 leopard kernel: [] wb_do_writeback+0x191/0x250
Dec 30 17:46:52 leopard kernel: [] bdi_writeback_task+0x8e/0x1e0
Dec 30 17:46:52 leopard kernel: [] bdi_start_fn+0x92/0x100
Dec 30 17:46:52 leopard kernel: [] ? bdi_start_fn+0x0/0x100
Dec 30 17:46:52 leopard kernel: [] kthread+0x88/0x90
Dec 30 17:46:52 leopard kernel: [] ? __switch_to+0xc2/0x2f0
Dec 30 17:46:52 leopard kernel: [] child_rip+0xa/0x20
Dec 30 17:46:52 leopard kernel: [] ? kthread+0x0/0x90
Dec 30 17:46:52 leopard kernel: [] ? child_rip+0x0/0x20

Dec 30 17:46:52 leopard kernel: INFO: task jbd2/dm-0-8:1757 blocked for more than 120 seconds.

Dec 30 17:46:52 leopard kernel: Tainted: G --------------- T 2.6.32-31-pve #1
Dec 30 17:46:52 leopard kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 30 17:46:52 leopard kernel: jbd2/dm-0-8 D ffff881071e3e380 0 1757 2 0 0x00000000
Dec 30 17:46:52 leopard kernel: ffff8810759fbb40 0000000000000046 ffff8810759fbaf0 ffffffff8143b8ec
Dec 30 17:46:52 leopard kernel: ffff8810759fbab0 ffffffff81014d79 ffff8810759fbaf0 0000000000000282
Dec 30 17:46:52 leopard kernel: ffff8810759fbaf0 00000000fba8805e ffff8810759fbfd8 ffff8810759fbfd8
Dec 30 17:46:52 leopard kernel: Call Trace:
Dec 30 17:46:52 leopard kernel: [] ? dm_table_unplug_all+0x5c/0x100
Dec 30 17:46:52 leopard kernel: [] ? read_tsc+0x9/0x20
Dec 30 17:46:52 leopard kernel: [] ? sync_page+0x0/0x50
Dec 30 17:46:52 leopard kernel: [] io_schedule+0x73/0xc0
Dec 30 17:46:52 leopard kernel: [] sync_page+0x3b/0x50
Dec 30 17:46:52 leopard kernel: [] __wait_on_bit+0x60/0x90
Dec 30 17:46:52 leopard kernel: [] wait_on_page_bit+0x80/0x90
Dec 30 17:46:52 leopard kernel: [] ? wake_bit_function+0x0/0x50
Dec 30 17:46:52 leopard kernel: [] wait_on_page_writeback_range.part.36+0xea/0x180
Dec 30 17:46:52 leopard kernel: [] ? submit_bio+0x83/0x190
Dec 30 17:46:52 leopard kernel: [] wait_on_page_writeback_range+0x15/0x20
Dec 30 17:46:52 leopard kernel: [] filemap_fdatawait+0x2f/0x40
Dec 30 17:46:52 leopard kernel: [] jbd2_journal_commit_transaction+0x788/0x1460 [jbd2]
Dec 30 17:46:52 leopard kernel: [] ? __switch_to+0xc2/0x2f0
Dec 30 17:46:52 leopard kernel: [] ? lock_timer_base.isra.52+0x38/0x70
Dec 30 17:46:52 leopard kernel: [] kjournald2+0xb8/0x200 [jbd2]
Dec 30 17:46:52 leopard kernel: [] ? autoremove_wake_function+0x0/0x40
Dec 30 17:46:52 leopard kernel: [] ? kjournald2+0x0/0x200 [jbd2]
Dec 30 17:46:52 leopard kernel: [] kthread+0x88/0x90
Dec 30 17:46:52 leopard kernel: [] ? __switch_to+0xc2/0x2f0
Dec 30 17:46:52 leopard kernel: [] child_rip+0xa/0x20
Dec 30 17:46:52 leopard kernel: [] ? kthread+0x0/0x90
Dec 30 17:46:52 leopard kernel: [] ? child_rip+0x0/0x20


------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------


# cat /etc/fstab

#
/dev/sda1 / ext4 errors=remount-ro 0 1
/dev/sda2 swap swap defaults 0 0
/dev/pve/data /var/lib/vz ext4 noatime,relatime,nodelalloc 1 2
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0



-->> DETAILED PROXMOX INFORMATION:


# pveversion -v

proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1



-->> DETAILED KERNEL MODULE INFORMATION FOR THE CONTROLLER:


# modinfo megaraid_sas

filename: /lib/modules/2.6.32-34-pve/kernel/drivers/scsi/megaraid/megaraid_sas.ko
description: LSI MegaRAID SAS Driver
author: megaraidlinux@lsi.com
version: 06.703.11.00
license: GPL
srcversion: D459B6A575926A72EB8FFAB
alias: pci:v00001000d0000005Fsv*sd*bc*sc*i*
alias: pci:v00001000d0000005Dsv*sd*bc*sc*i*
alias: pci:v00001000d0000005Bsv*sd*bc*sc*i*
alias: pci:v00001028d00000015sv*sd*bc*sc*i*
alias: pci:v00001000d00000413sv*sd*bc*sc*i*
alias: pci:v00001000d00000071sv*sd*bc*sc*i*
alias: pci:v00001000d00000073sv*sd*bc*sc*i*
alias: pci:v00001000d00000079sv*sd*bc*sc*i*
alias: pci:v00001000d00000078sv*sd*bc*sc*i*
alias: pci:v00001000d0000007Csv*sd*bc*sc*i*
alias: pci:v00001000d00000060sv*sd*bc*sc*i*
alias: pci:v00001000d00000411sv*sd*bc*sc*i*
depends:
vermagic: 2.6.32-34-pve SMP mod_unload modversions
parm: max_sectors:Maximum number of sectors per IO command (int)
parm: msix_disable:Disable MSI-X interrupt handling. Default: 0 (int)
parm: msix_vectors:MSI-X max vector count. Default: Set by FW (int)
parm: throttlequeuedepth:Adapter queue depth when throttled due to I/O timeout. Default: 16 (int)
parm: resetwaittime:Wait time in seconds after I/O timeout before resetting adapter. Default: 180 (int)
parm: crashdump_enable:Firmware Crash dump feature enable/disbale Default: enable(1) (int)
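The `resetwaittime` and `throttlequeuedepth` parameters listed above are tunable at module load time via modprobe configuration. A sketch that writes the option line to a local file; on a real system it would go in `/etc/modprobe.d/megaraid_sas.conf` followed by an initrd rebuild and a reboot, and the chosen values are illustrative only, not a recommendation:

```shell
# Write a modprobe options line for the megaraid_sas parameters shown above.
# resetwaittime=60 and throttlequeuedepth=32 are example values only.
CONF=./megaraid_sas.conf   # stand-in for /etc/modprobe.d/megaraid_sas.conf
cat > "$CONF" <<'EOF'
options megaraid_sas resetwaittime=60 throttlequeuedepth=32
EOF
cat "$CONF"
```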




-->> DETAILED CONTROLLER INFORMATION:


# MegaCli -AdpAllInfo -aALL

Adapter #0

==============================================================================
Versions
================
Product Name : LSI MegaRAID SAS 9271-4i
Serial No : SV32510176
FW Package Build : 23.28.0-0015

Mfg. Data
================
Mfg. Date : 06/18/13
Rework Date : 00/00/00
Revision No : 07C
Battery FRU : N/A

Image Versions in Flash:
================
BIOS Version : 5.46.02.1_4.16.08.00_0x06060A03
WebBIOS Version : 6.1-72-e_72-01-Rel
Preboot CLI Version : 05.07-00:#%00011
FW Version : 3.400.45-3507
Boot Block Version : 2.05.00.00-0010

Pending Images in Flash
================
None

PCI Info
================
Vendor Id : 1000
Device Id : 005b
SubVendorId : 1000
SubDeviceId : 9276

Host Interface : PCIE

Number of Frontend Port: 0
Device Interface : PCIE

Number of Backend Port: 8
Port : Address
0 4433221101000000
1 4433221102000000
2 4433221103000000
3 0000000000000000
4 0000000000000000
5 0000000000000000
6 0000000000000000
7 0000000000000000

HW Configuration
================
SAS Address : 500605b006b879f0
BBU : Present
Alarm : Present
NVRAM : Present
Serial Debugger : Present
Memory : Present
Flash : Present
Memory Size : 1024MB
TPM : Absent

Settings
================
Current Time : 18:18:27 12/30, 2014
Predictive Fail Poll Interval : 300sec
Interrupt Throttle Active Count : 16
Interrupt Throttle Completion : 50us
Rebuild Rate : 30%
PR Rate : 30%
Resynch Rate : 30%
Check Consistency Rate : 30%
Reconstruction Rate : 30%
Cache Flush Interval : 4s
Max Drives to Spinup at One Time : 2
Delay Among Spinup Groups : 12s
Physical Drive Coercion Mode : Disabled
Cluster Mode : Disabled
Alarm : Enabled
Auto Rebuild : Enabled
Battery Warning : Enabled
Ecc Bucket Size : 15
Ecc Bucket Leak Rate : 1440 Minutes
Restore HotSpare on Insertion : Disabled
Expose Enclosure Devices : Enabled
Maintain PD Fail History : Enabled
Host Request Reordering : Enabled
Auto Detect BackPlane Enabled : SGPIO/i2c SEP
Load Balance Mode : Auto
Use FDE Only : No
Security Key Assigned : No
Security Key Failed : No
Security Key Not Backedup : No

Any Offline VD Cache Preserved : No

Capabilities
================
RAID Level Supported : RAID0, RAID1, RAID5, RAID6, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported
Supported Drives : SAS, SATA

Allowed Mixing:

Mix in Enclosure Allowed
Mix of SAS/SATA of HDD type in VD Allowed

Status
================
ECC Bucket Count : 0

Limitations
================
Max Arms Per VD : 32
Max Spans Per VD : 8
Max Arrays : 128
Max Number of VDs : 64
Max Parallel Commands : 1008
Max SGE Count : 60
Max Data Transfer Size : 8192 sectors
Max Strips PerIO : 42
Min Stripe Size : 8 KB
Max Stripe Size : 1.0 MB

Device Present
================
Virtual Drives : 1
Degraded : 0
Offline : 0
Physical Devices : 4
Disks : 3
Critical Disks : 0
Failed Disks : 0

Supported Adapter Operations
================
Rebuild Rate : Yes
CC Rate : Yes
BGI Rate : Yes
Reconstruct Rate : Yes
Patrol Read Rate : Yes
Alarm Control : Yes
Cluster Support : No
BBU : Yes
Spanning : Yes
Dedicated Hot Spare : Yes
Revertible Hot Spares : Yes
Foreign Config Import : Yes
Self Diagnostic : Yes
Allow Mixed Redundancy on Array : No
Global Hot Spares : Yes
Deny SCSI Passthrough : No
Deny SMP Passthrough : No
Deny STP Passthrough : No
Support Security : No

Supported VD Operations
================
Read Policy : Yes
Write Policy : Yes
IO Policy : Yes
Access Policy : Yes
Disk Cache Policy : Yes
Reconstruction : Yes
Deny Locate : No
Deny CC : No
Allow Ctrl Encryption: No

Supported PD Operations
================
Force Online : Yes
Force Offline : Yes
Force Rebuild : Yes
Deny Force Failed : No
Deny Force Good/Bad : No
Deny Missing Replace : No
Deny Clear : No
Deny Locate : No
Disable Copyback : No
Enable Copyback on SMART : No
Enable Copyback to SSD on SMART Error : Yes
Enable SSD Patrol Read : No
Enable Spin Down of UnConfigured Drives : Yes

Error Counters
================
Memory Correctable Errors : 0
Memory Uncorrectable Errors : 0

Cluster Information
================
Cluster Permitted : No
Cluster Active : No

Default Settings
================
Phy Polarity : 0
Phy PolaritySplit : 0
Background Rate : 30
Stripe Size : 256kB
Flush Time : 4 seconds
Write Policy : WB
Read Policy : Adaptive
Cache When BBU Bad : Disabled
Cached IO : No
SMART Mode : Mode 6
Alarm Disable : Yes
Coercion Mode : None
ZCR Config : Unknown
Dirty LED Shows Drive Activity : No
BIOS Continue on Error : No
Spin Down Mode : None
Allowed Device Type : SAS/SATA Mix
Allow Mix in Enclosure : Yes
Allow HDD SAS/SATA Mix in VD : Yes
Allow SSD SAS/SATA Mix in VD : No
Allow HDD/SSD Mix in VD : No
Allow SATA in Cluster : No
Max Chained Enclosures : 16
Disable Ctrl-R : Yes
Enable Web BIOS : Yes
Direct PD Mapping : No
BIOS Enumerate VDs : Yes
Restore Hot Spare on Insertion : No
Expose Enclosure Devices : Yes
Maintain PD Fail History : Yes
Disable Puncturing : No
Zero Based Enclosure Enumeration : No
PreBoot CLI Enabled : Yes
LED Show Drive Activity : Yes
Cluster Disable : Yes
SAS Disable : No
Auto Detect BackPlane Enable : SGPIO/i2c SEP
Use FDE Only : No
Enable Led Header : No
Delay during POST : 0

Exit Code: 0x00