Tuesday, May 14, 2019
MongoDB: Change a secondary's priority (avoid becoming primary)
Hi,
You may have a MongoDB replica set in which you want to prevent certain members from ever becoming primary.
For example, we have one replica set: one primary (Roma), one local secondary (Roma) and one remote secondary (Milano). If there is a problem with the primary, either the Roma or the Milano secondary may become primary after an election. But Milano is our disaster recovery site, so it should never be elected primary.
To achieve this, we need to set that member's priority to 0:
1)cfg = rs.conf()
2)cfg.members[2].priority = 0
3) rs.reconfig(cfg)
Attention:
cfg.members[2].priority ------------------> This sets the priority of the member at array index 2 (the third member in rs.conf()); the index may be different in your replica set.
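To double-check the change, here is a minimal sketch run from the operating system shell (roma-mongo01 is only a placeholder for one of your replica set members):
mongo --host roma-mongo01 --eval 'rs.conf().members.forEach(function(m){ print(m.host + " priority=" + m.priority); })'
The Milano member should now report priority=0.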
MongoDB is pretty:)
Failed: necodb.contents: error creating collection necodb.contents: error running create command: BSON field 'OperationSessionInfo.create' is a duplicate field
Hi,
I have a migration project: we will migrate from MongoDB 2.4.12 to MongoDB 4.0.6.
We are using the Community edition, so we have to use the mongodump method.
There is no error while exporting the data, but the import fails.
Problem:
-----------
##############################################################################
Failed: necodb.contents: error creating collection necodb.contents: error running create command: BSON field 'OperationSessionInfo.create' is a duplicate field
##############################################################################
Solution:
-----------
1) Use the 2.4.12 mongodump against the 2.4 database:
mongodump --db necodb --out /mondirec/backup/
2) Delete or move all .json files from the dump output directory (see the sketch after step 3).
3) Use the 4.0.6 mongorestore on the remaining .bson files:
mongorestore --db necodb --drop /mondirec/backup/necodb
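For step 2, a minimal sketch (the paths follow the dump command above; the json_backup directory is only a hypothetical place to park the metadata files):
mkdir -p /mondirec/backup/json_backup
mv /mondirec/backup/necodb/*.json /mondirec/backup/json_backup/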
Thanks to dba.stackexchange.com for this solution.
Reference:
https://dba.stackexchange.com/questions/201827/failed-to-import-a-mongodb-database-with-duplicate-fields
Wednesday, May 9, 2018
Exadata Disk Scrubbing Problem
Problem:
--------
We hit a problem in previous weeks related to Exadata storage. In fact the root cause is not an Exadata defect, but the impact is big.
We are using Attunity Replicate as a CDC tool, and its CDC method is different from GoldenGate's.
As you know, GoldenGate uses a stream-based capture for reading redo logs and archives, but Attunity Replicate is different.
(I will use the shortcut AR for Attunity Replicate.)
AR reads the online redo logs and archive logs through an Oracle DBMS package, so it can be affected by disk I/O or other storage load.
Solution:
-----------
Now the other side of the story:
Disk scrubbing is a feature introduced with Oracle 11.2.0.4 and Exadata storage software 11.2.3.3.0.
Disk scrubbing periodically checks the disks for corruption risks.
In the default configuration it runs every two weeks (hardDiskScrubInterval=biweekly).
Let me show the default configuration:
CellCLI> list cell attributes name,hardDiskScrubInterval
x6celadm05 biweekly
Up to this point everything is fine: Oracle protects our disks and data. But when every cell starts scrubbing at the same time, you may have an I/O problem.
Our database has a lot of transactions, and we did not see any impact from scrubbing until AR slowed down (latency kept growing) while scrubbing was running.
After we found the scrubbing information in the cell alert logs, we decided to stagger the scrubbing and set a different start time for every cell.
For example: one cell's disk scrubbing takes about 44 hours and we have 7 cells, so I did not change the interval (biweekly). After cell 1 finishes scrubbing, cell 2 starts; after cell 2 finishes, cell 3 starts, and so on. With this method, only one cell is scrubbing at any given time.
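Here is a minimal sketch of how the start times can be set, one cell at a time (the cell names and timestamps match the listing below; adjust them for your environment):
dcli -c x6celadm01 -l root cellcli -e "ALTER CELL hardDiskScrubStartTime='2018-04-10T23:00:00+03:00'"
dcli -c x6celadm02 -l root cellcli -e "ALTER CELL hardDiskScrubStartTime='2018-04-12T20:00:00+03:00'"
Repeat the ALTER CELL for the remaining cells, each with its own start time.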
dcli -g cell_group -l root cellcli -e "list cell attributes name,hardDiskScrubInterval,hardDiskScrubStartTime"
x6celadm07: x6celadm07 biweekly 2018-04-22T05:00:00+03:00
x6celadm06: x6celadm06 biweekly 2018-04-20T08:00:00+03:00
x6celadm05: x6celadm05 biweekly 2018-04-18T11:00:00+03:00
x6celadm04: x6celadm04 biweekly 2018-04-16T14:00:00+03:00
x6celadm03: x6celadm03 biweekly 2018-04-14T17:00:00+03:00
x6celadm02: x6celadm02 biweekly 2018-04-12T20:00:00+03:00
x6celadm01: x6celadm01 biweekly 2018-04-10T23:00:00+03:00
After this implementation, AR (Attunity Replicate) latencies went down.
ORA-00979: not a GROUP BY expression
Problem:
-----------
ORA-00979: not a GROUP BY expression
We have upgraded our database from 11.2.0.4 to 12.2.0.1.
After the upgrade, one SQL statement got the "ORA-00979: not a GROUP BY expression" error.
Solution:
-----------
It looked similar to Bug 18749211 ORA-979 FROM SELECT WITH COLUMN MASKING VPD AND VIEW MERGING VPD, but we do not use VPD or anything like it; we only upgraded the database.
Also, the /*+ materialize */ hint, advised as a workaround in the bug document, did not help.
Then I found another bug, 27170305: ORA-00979 WHEN A CASE STATEMENT IS THERE IN THE "GROUP BY" EXPRESSION.
Workaround:
---------------------
/*+ optimizer_features_enable('11.1.0.6') */
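For reference, a minimal sketch of where the workaround hint goes; the connect string, table and columns below are only hypothetical, the point is that the hint sits right after the SELECT keyword of the failing statement:
sqlplus -s scott/tiger@ORCL <<'EOF'
SELECT /*+ optimizer_features_enable('11.1.0.6') */
       dept_id,
       CASE WHEN salary > 1000 THEN 'HIGH' ELSE 'LOW' END AS sal_band,
       COUNT(*)
FROM   employees
GROUP BY dept_id,
       CASE WHEN salary > 1000 THEN 'HIGH' ELSE 'LOW' END;
EOF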
But this hint caused performance problems, so we need to apply the patch if possible :)
If the patch is not available for your version, as in our case, you need to ask Oracle Support for it.
The patch was made ready for our system within a week :)
Monday, November 2, 2015
Netezza Backup & Restore via Networker
EMC NetWorker supports Netezza backup configuration, and we use this backup methodology.
Last week we unfortunately deleted some rows, so we had to restore the table to its state from before the deletion.
But we were newbies at this task, a Netezza restore.
Finally we ran the restore command and restored it. By default, without any date parameter, NetWorker restores the latest backup, but I needed to restore from the backup three generations back.
So we pulled that older backup set ID out of the backup log.
Then we restored it.
Backup command:
-----------------------
nzbackup -db XX_DWH -connector networker -connectorArgs NSR_SERVER=networkerist.doganay.com.tr:NSR_DATA_VOLUME_POOL=DDBoostNetezza
Restore Command:
------------------------
nzrestore -db XX_DWH_RST -sourcedb XX_DWH -backupset 20151017201912 -connector networker -connectorArgs NSR_SERVER=networkerist.doganay.com.tr:NSR_DATA_VOLUME_POOL=DDBoostNetezza -tables XX_HIST -sourceschema XX_ADMIN
XX_DWH_RST --------> Empty target database, created by us beforehand.
XX_DWH ---------------> Source database that was backed up.
20151017201912 -------> Backup set ID (you cannot find this ID in the backup admin tool; you have to mine the Netezza backup log, see the sketch below).
networkerist.doganay.com.tr --------> NetWorker server hostname.
XX_HIST ---------------> Table name that you want to restore.
XX_ADMIN ------------> The table's schema.
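A minimal sketch for mining the backup set ID from the backup server log (the log path is an assumption based on a default Netezza installation; adjust it for your system):
grep -i backupset /nz/kit/log/backupsvr/*.log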
Restore output:
-------------------
Restore of increment 1 from backupset 20151017201912 to database 'XX_DWH_RST' committed.
Restore of increment 2 from backupset 20151017201912 to database 'XX_DWH_RST' committed.
Restore of increment 3 from backupset 20151017201912 to database 'XX_DWH_RST' committed.
Restore of increment 4 from backupset 20151017201912 to database 'XX_DWH_RST' committed.
Restore of increment 5 from backupset 20151017201912 to database 'XX_DWH_RST' committed.
Restore of increment 6 from backupset 20151017201912 to database 'XX_DWH_RST' committed.
Pray & Try :)
Friday, October 2, 2015
ORA-01804 failure to initialize timezone information dbua
Problem:
-----------
ORA-01804 "failure to initialize timezone information"
While upgrading an 11.1.0.7 database to 12.1.0.2, I got this error when running the DBUA.
Solution:
-----------
unset ORA_TZFILE
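A minimal sketch, assuming a bash shell; the variable must be cleared in the same session that launches the DBUA:
env | grep ORA_TZFILE     # check whether the variable is set
unset ORA_TZFILE          # clear it for this session
dbua                      # then start the upgrade assistant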
Thursday, September 17, 2015
Crfclust.bdb IS TOO BIG
Problem:
------------
Crfclust.bdb file is growing.
We have an ODA machine, a two-node RAC.
Every day we got a /u01 disk space alert. We cleaned trace files, cdmp and incdir directories, etc., but only a little disk space was freed.
One day I found a crfclust.bdb file of 32 GB under $GRID_HOME.
This file is used by the Cluster Health Monitor, but there is a problem with its size.
This is a bug related to the ODA RAC system.
[NECO1]/u01/app/11.2.0.4/grid/crf/db/neco1 $ du -sh crfclust.bdb
32G crfclust.bdb
Solution:
----------
[grid@neco1 ~]$ oclumon manage -get repsize
CHM Repository Size = 204737600
Done
This size is the repository retention in seconds. Our retention is about 6.5 years. This is incredible :))
The value should be between 3600 (1 hour) and 259200 (3 days).
------------------------------------------------------------------------------------
[grid@neco2 ~]$ oclumon manage -repos resize 259200
neco1 --> retention check successful
neco2 --> retention check successful
New retention is 259200 and will use 4524595200 bytes of disk space
CRS-9115-Cluster Health Monitor repository size change completed on all nodes.
Done
------------------------------------------------------------------------------------
4524595200 bytes is about 4.2 GB.
------------------------------------------------------------------------------------
Then check the size:
[grid@neco2 ~]$ oclumon manage -get repsize
CHM Repository Size = 259200
Done
--------------------------------------------------------------------------------------------------------------------------
If you get an error like the one below while running 'oclumon manage -get repsize', you should stop and start ora.crf with crsctl (see the sketch after the error).
CRS-9011-Error manage: Failed to initialize connection to the Cluster Logger Service
--------------------------------------------------------------------------------------------------------------------------
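A minimal sketch of the restart, run as a privileged user (typically root) from the Grid Infrastructure home:
crsctl stop res ora.crf -init
crsctl start res ora.crf -init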
In my case I didn't need to bounce the ora.crf.
Now it is a very nice size :))
[root@neco1 neco1]# du -sh crfclust.bdb
2.2M crfclust.bdb
Reference note:
-------------------
ODA Nodes Lacking Space Due to Large Cluster Health Monitor File Crfclust.Bdb (Doc ID 1616910.1)