A collection of common Oracle operation and maintenance commands

Time:2021-8-18


1、 Oracle database creation and deletion commands

(1) Oracle 11g

Create a database (by convention, gdbname is set to the same value as the SID, and the sys password to the same value as the system password, to make them easier to remember)

[oracle@host ~]$ dbca -silent -createDatabase -templateName /u01/oracle/product/11.2.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc -gdbname GDBNAME -sid SIDNAME -characterSet AL32UTF8 -NATIONALCHARACTERSET UTF8 -sysPassword SYSPASSWORD -systemPassword SYSTEMPASSWORD -TOTALMEMORY 2048

Database deletion method 1:

[oracle@host ~]$ dbca -silent -deleteDatabase -sourceDB SIDNAME -sysDBAUserName sys -sysDBAPassword SYSPASSWORD

Database deletion method 2:

#Step 1: configure the response file:
[oracle@host ~]$ cat /u01/oracle/response/dbca.rsp
OPERATION_TYPE = "deleteDatabase"
SOURCEDB = "SIDNAME"
SYSDBAUSERNAME = "sys"
SYSDBAPASSWORD = "SYSPASSWORD"
#Step 2: delete the database using the response file:
[oracle@host ~]$ dbca -silent -responseFile /u01/oracle/response/dbca.rsp

(2) Oracle 12c: create a database

[oracle@host ~]$ dbca -silent -createDatabase -templateName /u01/oracle/product/orahome/assistants/dbca/templates/General_Purpose.dbc -gdbname GDBNAME -sid SIDNAME -characterSet AL32UTF8 -NATIONALCHARACTERSET UTF8 -sysPassword SYSPASSWORD -systemPassword SYSTEMPASSWORD -TOTALMEMORY 2048
#The deletion command is the same as above. Note that after running it you still need to remove the leftover directories manually.

(3) Deleting a database normally removes the database information under the following paths or files automatically

A: /u01/oracle/admin/SIDNAME
B: the corresponding entry in /etc/oratab (check with cat /etc/oratab)
C: /u01/oracle/oradata/SIDNAME

The following paths require manual cleanup

D: /u01/oracle/cfgtoollogs/dbca/SIDNAME
E: /u01/oracle/diag/rdbms/SIDNAME
F: /u01/oracle/product/11.2.0/dbhome_1/dbs/hc_SIDNAME.dat
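
If you want to script that manual cleanup, a minimal shell sketch (my addition, assuming the paths listed above and that the database itself has already been deleted):

[oracle@host ~]$ rm -rf /u01/oracle/cfgtoollogs/dbca/SIDNAME
[oracle@host ~]$ rm -rf /u01/oracle/diag/rdbms/SIDNAME
[oracle@host ~]$ rm -f /u01/oracle/product/11.2.0/dbhome_1/dbs/hc_SIDNAME.dat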

2、 Create the account and password for the database

1. Switch sid

[oracle@host ~]$ export ORACLE_SID=SIDNAME

2. Set the character set

#View the character set of Oracle Database
SQL> select userenv('language') from dual;
#View the encoding of the Oracle database
SQL> select * from nls_database_parameters where parameter ='NLS_CHARACTERSET';
C:\> set NLS_LANG=AMERICAN_AMERICA.AL32UTF8                 #windows_os
[oracle@host ~]$ export NLS_LANG=AMERICAN_AMERICA.AL32UTF8  #linux_os

3. Create the user information for the database

SQL> create temporary tablespace SIDNAME_temp tempfile '/u01/oracle/oradata/SIDNAME/SIDNAME_temp.dbf' size 64m autoextend on next 64m maxsize unlimited extent management local;
SQL> create tablespace SIDNAME_data logging datafile '/u01/oracle/oradata/SIDNAME/SIDNAME_data.dbf' size 64m autoextend on next 64m maxsize 2048m extent management local;
SQL> create user USERNAME identified by USERPASSWORD default tablespace SIDNAME_data temporary tablespace SIDNAME_temp;
SQL> grant connect,resource to USERNAME;
SQL> grant create view to USERNAME;
SQL> grant unlimited tablespace to USERNAME;
SQL> grant create public synonym to USERNAME;
SQL> grant drop public synonym to USERNAME;
SQL> create or replace directory dir_dump  as '/u01/oracle/backup';
SQL> grant read,write on directory dir_dump to USERNAME;
SQL> ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
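
To verify the setup, a quick check (my addition; standard data dictionary views, assuming the user was created as above):

SQL> select granted_role from dba_role_privs where grantee='USERNAME';
SQL> select privilege from dba_sys_privs where grantee='USERNAME';
SQL> select default_tablespace, temporary_tablespace from dba_users where username='USERNAME';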

-- modify the processes and sessions parameter values according to the instance environment (the Oracle database needs to be restarted for them to take effect)

SQL> alter system set processes=1000 scope=spfile;  
SQL> alter system set sessions=1105 scope=spfile; 

Knowledge points:

  • Oracle 11g: sessions should be set to at least 1.1 * processes + 5. If it is set lower, Oracle automatically raises it to 1.1 * processes + 5 at startup. This mainly accounts for sessions started by background processes and roughly 10% recursive sessions.
  • Oracle 12c: sessions should be at least 1.1 * processes + 22.
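
To compare the currently configured values against the rules above, a quick check (my addition; standard SQL*Plus commands):

SQL> show parameter processes
SQL> show parameter sessions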
#Query the current Oracle concurrent connections:
SQL> select count(*) from v$session where status='ACTIVE';
#To view the number of connections for different users:
SQL> select username,count(username) from v$session where username is not null group by username;
#View all users:
SQL> select * from all_users;
#Current number of connections:
SQL> select count(*) from v$process;
#Maximum number of connections allowed by the database:
SQL> select value from v$parameter where name = 'processes';

3、 Database backup and restore commands

To view the storage path of expdp export backup:

SQL> select * from dba_directories;

1. Backup and restore operations for the entire database

  • (1) Database backup (note that sometimes SIDNAME and SCHEMASNAME are not the same; distinguish them carefully when using these commands. They are usually given the same name to make them easier to remember and maintain. Set the parallel parameter to a reasonable value based on the server's memory.)
#Backup:
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME dumpfile=SIDNAME`date +%Y%m%d`.dmp directory=dir_dump parallel=2
  • (2) Restore

Case 1: the instance names of the source database and the target database are different (note that in some environments the data tablespace name is not SIDNAME_data; verify before using)

#Format:
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DUMPFILE=XXXX.dmp DIRECTORY=dir_dump remap_schema=SOURCE_SCHEMASNAME:TARGET_SCHEMASNAME remap_tablespace=SOURCE_DATA:TARGET_DATA

Case 2: the instance names of the source database and the target database are the same

#Format:
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DUMPFILE=XXXX.dmp DIRECTORY=dir_dump EXCLUDE=STATISTICS

Knowledge extension: when restoring with EXCLUDE=STATISTICS, statistics are skipped. You can gather them afterwards with the following command:

#The command is as follows:
SQL> exec dbms_stats.gather_schema_stats(ownname=>'SIDNAME',estimate_percent=>10,degree=>8,cascade=>true,granularity=>'ALL');

2. Single-table backup and restore operations

  • (1) Back up a single table
#Format:
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME dumpfile=tablenameXXXX.dmp DIRECTORY=dir_dump tables=TABLENAME
  • (2) Restore a single table
#Format:
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME dumpfile=tablenameXXXX.dmp DIRECTORY=dir_dump TABLES=TABLENAME TABLE_EXISTS_ACTION=REPLACE

Extension: TABLE_EXISTS_ACTION parameter description

When importing data with imp, if the table already exists, you have to drop it first and then import.
When impdp is used to import, if the table already exists there are four ways to handle it (see the example after this list):
Parameter (1) skip: default operation
Parameter (2) replace: drop the table first, then create the table, and finally insert data
Parameter (3) append: adds data to the original data
Parameter (4) truncate: truncate before inserting data
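For example, to append rows into a table that already exists instead of replacing it (my addition; a sketch using the same placeholders as above, where the optional CONTENT=DATA_ONLY limits the import to row data):

[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME dumpfile=tablenameXXXX.dmp DIRECTORY=dir_dump TABLES=TABLENAME TABLE_EXISTS_ACTION=APPEND CONTENT=DATA_ONLY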
  • (3) Back up multiple tables
#Format:
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME dumpfile=tablenameXXXX.dmp DIRECTORY=dir_dump tables=SOURCE_TABLENAME1,SOURCE_TABLENAME2
  • (4) Restore multiple tables
#Format:
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME dumpfile=tablenameXXXX.dmp DIRECTORY=dir_dump remap_table=SOURCE_TABLENAME1:DEST_TABLENAME11 TABLE_EXISTS_ACTION=REPLACE
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME dumpfile=tablenameXXXX.dmp DIRECTORY=dir_dump remap_table=SOURCE_TABLENAME2:DEST_TABLENAME22 TABLE_EXISTS_ACTION=REPLACE

3. Extended knowledge

  • 1) Extension 1:

Case 1: exporting from a higher version and restoring to a lower version, for example restoring 12c to 11g. Add the lower version number when running the export on 12c: version=11.1.0.2.0

#Format:
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME dumpfile=XXX.dmp DIRECTORY=dir_dump version=11.1.0.2.0

Case 2: restoring from a lower version to a higher version. Higher versions are generally backward compatible with lower ones; in my own operation and maintenance, restoring 11g into 12c has caused no problems so far.

  • 2) Extension 2:

(1) Back up in files of a specified size, for example 5 GB each (parallel enables multi-threaded processing; the number of threads should be less than the number of generated files and less than the number of CPU threads)

#Backup format
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DIRECTORY=dir_dump dumpfile=XXX_%U.dmp logfile=expdpXXX.log filesize=5G parallel=16

(2) Restore from multiple backup files:

#Restore format
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DIRECTORY=dir_dump dumpfile=XXX_%U.dmp logfile=impdpXXX.log parallel=16

3) Extension 3

Export filter: exclude a table from the export:

#Format
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DIRECTORY=dir_dump dumpfile=XXX.dmp exclude=TABLE:"IN ('TABLENAME')"

4) Extension 4

When restoring into a different database without changing the data structure, use the truncate option:

#Format
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DIRECTORY=dir_dump dumpfile=XXX.dmp remap_schema=SOURCE_SCHEMASNAME:TARGET_SCHEMASNAME remap_tablespace=SOURCE_DATA:TARGET_DATA TABLE_EXISTS_ACTION=truncate

5) Extension 5

Export a backup that keeps the table definitions but none of their row data (using the query parameter):

#Format
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DIRECTORY=dir_dump dumpfile=XXX.dmp logfile=expdpXXX.log query=TABLENAME1:'" where 1=2"',TABLENAME2:'" where 1=2"',...

6) Extension 6

When exporting a backup: keep the tables, clear the table data, and filter the two tables at the same time

#Format
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DIRECTORY=dir_dump dumpfile=XXX.dmp logfile=expdpXXX.log query=TABLENAME1:'" where 1=2"',TABLENAME2:'" where 1=2"' exclude=TABLE:"IN ('TABLENAME1','TABLENAME2')"

7) Extension 7

Only estimate the size of the data in each table of the database without actually exporting it, using the parameter estimate_only=y

#Format
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME DIRECTORY=dir_dump estimate_only=y

8) Extension 8

To import a single table to a temporary table:

#Format (when statistics and indexes are involved, add the parameters EXCLUDE=STATISTICS and EXCLUDE=INDEX)
-- SQL to copy from the source table into another temporary table
SQL> CREATE TABLE DEST_TABLENAME AS (SELECT * FROM SOURCE_TABLENAME);
-- clear the data in the table
SQL> delete from DEST_TABLENAME;
[oracle@host ~]$ impdp USERNAME/USERPASSWORD@SIDNAME DIRECTORY=dir_dump DUMPFILE=tablenameXXX.dmp remap_table=SOURCE_TABLENAME:DEST_TABLENAME TABLE_EXISTS_ACTION=REPLACE EXCLUDE=STATISTICS EXCLUDE=INDEX
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME schemas=SCHEMASNAME ESTIMATE_ONLY=y NOLOGFILE=y FULL=y

9) Extension 9

Some individual SQL scripts are very long and tend to hang when executed with the PL/SQL Developer tool. When there is no response, they can be executed not only in PL/SQL Developer's command window but also in a shell terminal:

#Format
[oracle@host ~]$ export ORACLE_SID=SIDNAME
[oracle@host ~]$ sqlplus / as sysdba
SQL> conn USERNAME/USERPASSWORD@SIDNAME
SQL> @/u01/oracle/backup/XXX.sql

10) Extension 10

How to correctly terminate expdp and impdp tasks is as follows:

Step 1: query the view DBA_DATAPUMP_JOBS
SQL> select job_name,state from dba_datapump_jobs;
Step 2: attach to the expdp export task so that it can be stopped correctly with stop_job
[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME attach=SYS_EXPORT_SCHEMA_02
Step 3: stop the task
Export> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes
Step 4: view the backup job status in the system
SQL> select owner_name,job_name,state from dba_datapump_jobs;

Extension: the following commands are valid in interactive mode:

HELP: summarize the interactive commands.
KILL_JOB: detach from and delete the job.
PARALLEL: change the number of active workers for the current job.
PARALLEL=<number of workers>
START_JOB: start/resume the current job.
START_JOB=SKIP_CURRENT skips any action that was in progress when the job stopped, before restarting the job.
STATUS: the frequency (in seconds) at which the job status is monitored; with the default value (0), new status is shown only when available.
STATUS[=interval]
STOP_JOB: shut down the running job in an orderly fashion and exit the client.
STOP_JOB=IMMEDIATE shuts the Data Pump job down immediately.
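
For example, to resume a job that was stopped with STOP_JOB (my addition; a sketch that reuses the job name found in DBA_DATAPUMP_JOBS above):

[oracle@host ~]$ expdp USERNAME/USERPASSWORD@SIDNAME attach=SYS_EXPORT_SCHEMA_02
Export> start_job
Export> continue_client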

11) Extension 11

#View the value of NLS_CHARACTERSET in the database
SQL> select * from v$nls_parameters  where parameter='NLS_CHARACTERSET'; 
SQL> select * from nls_database_parameters t where t.parameter='NLS_CHARACTERSET';

12) Extension 12

About exp, imp, expdp, impdp

(1) exp and imp are client-side tools; they can be used on both the client and the server.
(2) expdp and impdp are server-side tools; they can only be used on the Oracle server, not on a client.
(3) imp only applies to files exported by exp, not to files exported by expdp; impdp only applies to files exported by expdp, not to files exported by exp.
(4) On versions after 10g, empty tables with 0 rows of data cannot be exported with exp; expdp must be used instead (see the workaround sketched below).
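
Regarding item (4), one common workaround (my addition, not from the original; it assumes 11g deferred segment creation is the cause) is to allocate a segment for each empty table, or to disable deferred segment creation, before running exp:

SQL> -- disable deferred segment creation for tables created from now on (11g and later)
SQL> alter system set deferred_segment_creation=false;
SQL> -- generate the statements that allocate a segment for each existing empty table
SQL> select 'alter table '||table_name||' allocate extent;' from user_tables where num_rows=0;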

13) Extension 13

When the Oracle user password contains an @ sign: how to connect with expdp and sqlplus, and how to compress the backup files with RAR (Windows batch script):

@echo off
rem ---- dmp backup directory, same as dump dir
set backup_dir=e:\app\oracle\backup
rem ---- today, day for dmp file remaining
set day=%date:~0,4%%date:~5,2%%date:~8,2%
set remain_day=7
rem --- delete files before 7 days
forfiles /p "%backup_dir%" /d -%remain_day%  /c "cmd /c del /f @path"
rem --- export oracle data to dmp file
expdp USERNAME/"PASS@WORD"@orcl directory=dir_dump dumpfile=USERNAME%day%.dmp logfile=USERNAME%day%.log schemas=USERNAME parallel=4 compression=ALL
rem --- sqlplus conn
sqlplus USERNAME/"""PASS@WORD"""@orcl
rem ---- if compress the dumpfile and delete source dumpfile, unmark rem
set rar="C:\Program Files (x86)\WinRAR\WinRAR.exe"
%rar% a -df %backup_dir%\USERNAME%day%.rar %backup_dir%\USERNAME%day%.dmp %backup_dir%\USERNAME%day%.log

4、 Clean-up (if an error occurs during restore, clear the user's tablespaces)

1. Delete data tablespace:

#Execute statement:
[oracle@host ~]$ sqlplus / as sysdba
SQL> drop tablespace mepro_data including contents and datafiles cascade constraints;

2. Delete temporary tablespace:

#Execute statement:
SQL> drop tablespace mepro_temp including contents and datafiles cascade constraints;

3. Delete user:

#Execute statement:
SQL> drop user srmhdld cascade;

4. An error reports that the user is currently connected and cannot be dropped

---Method 1: restart and quickly execute the drop user statement (personal recommendation)
SQL> shutdown immediate;
SQL> startup
---Method 2: delete the session being connected (when there are many connected sessions, the cleaning takes time and is not as fast as method 1)
#Query user session
SQL> select username,serial#,sid,program,machine,status from v$session where username='USERNAME' AND STATUS='ACTIVE';
SQL> select saddr,sid,serial#,paddr,username,status from v$session where username is not null;
---Delete related user sessions
SQL> alter system kill session 'sid,serial#';
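
To kill every session of the user in one pass, a small sketch (my addition; it simply generates the kill statements from the query above, to be run afterwards):

SQL> select 'alter system kill session '''||sid||','||serial#||''' immediate;' from v$session where username='USERNAME';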

Source:https://www.toutiao.com/a6885…
