Auto Deployment of an Eclipse WAR File to Tomcat Using ANT Scripts

Eclipse allows us to run the application and then export it to a standard WAR file. But then someone has to manually perform the following steps:
1. Export the project to a WAR file using the Eclipse interface.
2. Copy the WAR file from the export location to the Tomcat webapps folder.
Here is a better alternative: create an ANT script that reads the Eclipse project structure, builds the WAR file, and copies it over to the destination (the Tomcat webapps folder).
Here is the ANT script:
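The original listing did not survive publishing; below is a minimal sketch of such a build.xml. The property values (project layout, Tomcat path) and target names are assumptions chosen to match the usage shown later, so adjust them to your environment:

```xml
<?xml version="1.0"?>
<project name="SanFrancisco" default="deploy" basedir=".">

  <!-- Adjust these paths for your workspace and Tomcat install -->
  <property name="webcontent.dir" value="WebContent"/>
  <property name="war.name" value="${ant.project.name}.war"/>
  <property name="tomcat.webapps" value="C:/tomcat/webapps"/>

  <!-- Build the WAR from the Eclipse project structure -->
  <target name="create">
    <war destfile="${war.name}" webxml="${webcontent.dir}/WEB-INF/web.xml">
      <fileset dir="${webcontent.dir}"/>
    </war>
  </target>

  <!-- Copy the WAR to the Tomcat webapps folder -->
  <target name="copy">
    <copy file="${war.name}" todir="${tomcat.webapps}"/>
  </target>

  <!-- Create the WAR and deploy it to Tomcat -->
  <target name="deploy" depends="create,copy"/>

  <!-- Unpack the WAR to inspect its contents -->
  <target name="unpack">
    <unzip src="${war.name}" dest="unpacked"/>
  </target>
</project>
```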
Save the above code into a file named "build.xml" and place it in the root folder of your Eclipse project. For example, if "c:\projects" is my Eclipse workspace and "SanFrancisco" is the name of my project, then all files related to the project will be under "c:\projects\SanFrancisco".
You can run the ant command inside the project's root folder using any installed ANT tool; alternatively, if you have the ArcGIS Java ADF installed, it ships a customized ANT executable you can use.
Here is the usage:
1. Create the WAR file and deploy it to tomcat
"c:\program files\arcgis\java\tools\ant\build\arcgisant" deploy
2. Just create the WAR file
"c:\program files\arcgis\java\tools\ant\build\arcgisant" create
3. Just copy over the WAR file to deploy location
"c:\program files\arcgis\java\tools\ant\build\arcgisant" copy
4. Unpack the WAR file to see the contents
"c:\program files\arcgis\java\tools\ant\build\arcgisant" unpack

Unpacking and Repacking EAR and WAR files

The procedures discussed in this chapter are specific to the Microsoft Windows platform.
To unpack the .ear or .war file:
1. Open the command prompt and navigate to the folder that contains the EAR or WAR file.
2. Create a folder named "[Temporary EAR location]" or "[Temporary WAR location]".
3. Go up one level, back to the folder that contains the EAR or WAR file.
4. Select the EAR or WAR file you want to unpack, right-click the file, select the Open with option, and select WinZip to unpack the EAR or WAR file to the temporary location "[Temporary EAR location]" or "[Temporary WAR location]".
Note: You can choose to configure your computer to consider EAR or WAR files as ZIP files,
by performing the following steps:
1. Go to the Control Panel.
2. Double-click Folder Options.
The Folder Options dialog box is displayed.
3. Click the File Types tab.
4. Navigate to the EAR or WAR extension in the Registered file types list.
5. Click Change.
The Open With dialog box is displayed.
6. Select WinZip in the Programs list.
7. Click OK.
8. Click Apply.
9. Click OK.
To repack the .ear or .war file:
1. Access one folder level above "[Temporary WAR location]”.
2. Select the "[Temporary WAR location]”.
3. Right-click the folder, select WinZip, and choose the Add to Zip option.
4. In the Add to archive field of the Add dialog box, specify the name of the .ear or .war file you
are repacking.
5. Click Add.

How to Test a Database Connection String Using Notepad


How to test a data provider's connection string (such as a SQL Server database) with the help of a plain text file using Notepad.

To investigate and test whether your connection string works, you're going to want to create a UDL file. To do this, follow these steps:
1. Open Notepad and create an empty text file, then click File -> Save and save it to your desktop with the file name TestConnection.udl.
2. Go to your desktop and double-click the TestConnection.udl file you just created; the Data Link Properties box will pop up.
3. Select the Provider tab, find the provider you want to connect with, and click Next >>.
4. On the Connection tab, select or enter your source/server name, enter the information to log on to the server, and select the database on the server.
5. Click Test Connection, then click OK to save the file.
Note: If errors occur while testing your connection string, a popup box will show the error message.
Once you've successfully tested your connection string, compare the details of your TestConnection.udl with your (website) project connection string to see whether they are similar.
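A successfully saved .udl file is just plain text holding the full OLE DB connection string; opened in Notepad it looks along these lines (the server and database names here are placeholders):

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=MYSERVER
```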

OutOfMemoryError: Heap space and PermGen space


Solution for Tomcat Server

It's quite common to run into memory problems when running a big Java EE application on a Tomcat server. Some of the most common errors look like the following ones.

This one is about a full heap space:

SEVERE: Servlet.service() for servlet jsp threw exception
java.lang.OutOfMemoryError: Java heap space

This other one concerns the PermGen space, a memory area where compiled classes (and JSPs) are kept; the error tends to happen when the running web application has many .java and .jsp files:

java.lang.OutOfMemoryError: PermGen space

To increase the memory available to Tomcat, for both heap and PermGen, the correct options are the following ones.
This sets the max heap available to Tomcat to 1 GB of memory (an option of the Windows service wrapper):
--JvmMx 1024
This sets the max PermGen available to Tomcat to 256 MB of memory:
-XX:MaxPermSize=256m
You can also update the memory settings from a GUI front end, or use it to check what happened after running the command-line tool. Running the following command:
tomcat6w //ES//Tomcat6
opens a window showing all the parameters of the Tomcat6 Windows service. There you can see, after running the previous command to set higher memory limits, the new limits in the Maximum memory pool field and at the end of the Java Options.
Memory Settings on Windows
Solution for Unix-based OSes (Ubuntu, Linux, Solaris, etc.)
Here you increase the memory by editing the catalina.sh file. Follow these steps (remove any stray line breaks from the line below when you add it to the script):
1) vi /usr/local/jakarta/tomcat/bin/catalina.sh
2) Add the following line to catalina.sh:
JAVA_OPTS="-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms512m -Xmx1024m -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+DisableExplicitGC"
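To confirm the new -Xmx ceiling actually reached the JVM, a tiny standalone check (a hypothetical helper, not part of Tomcat) can print the maximum heap the JVM sees; inside Tomcat you could log the same value from a servlet:

```java
public class MaxMemoryCheck {
    public static void main(String[] args) {
        // maxMemory() reports (approximately) the -Xmx ceiling, in bytes
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap visible to this JVM: " + maxMb + " MB");
    }
}
```

Run it with the same option, e.g. java -Xmx1024m MaxMemoryCheck, and the printed value should be close to 1024 MB.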
Happy Coding

Content Server install/uninstall/reinstall error solutions on Windows Platform


What to do if your Content Server installation doesn't work and you have to install again, but it is not being properly uninstalled by the program.

Follow the below given steps.

1. Delete all folders related to Documentum from the system: C:\, Program Files, My Documents. Search for all folders named Documentum (and their subfolders) and delete them all.
2. Delete all the related registry keys from Windows:
3. Start -> Run: regedt32 -> OK (logged on as administrator).
4. Search for "Documentum" using Edit -> Find; press F3 to go to the next match.
5. Delete all the matching registry keys.
6. Restart the system.

After doing this you will still have some entries under the LEGACY registry keys, which cannot be deleted easily. Follow the steps below.

1) Start -> Run: regedt32 -> OK (logged on as administrator).
2) Go to the HKLM sub-window and highlight the key you want deleted.
3) In the Regedt32 window menu, select Security -> Permissions.
4) Change the access permissions for Everyone and/or Administrators to Full Control, and apply the settings.
5) Repeat steps 1-4 for all keys you need deleted.
6) Close Regedt32 and open Regedit. Delete the keys you no longer want.
7) Restart the system and search for the keys again to make sure they are gone from the system.

Now you are good to go for a fresh Content Server installation.

Any positive/negative feedback is greatly appreciated.

How to drop all the tables and views from an Oracle schema for recreating the repository?


1. Create an SQL file, name it createdroptables.sql, and put the queries below into it:


SET SERVEROUTPUT ON;

SPOOL C:\droptables.LOG;

SELECT * FROM (SELECT 'DROP TABLE '||table_name||' CASCADE CONSTRAINTS;' FROM user_tables UNION
SELECT 'DROP VIEW '||VIEW_NAME||';' FROM user_views UNION
SELECT 'DROP SEQUENCE '|| SEQUENCE_NAME||';' FROM user_sequences UNION
SELECT 'DROP SYNONYM ' || SYNONYM_NAME ||';' FROM user_synonyms UNION
SELECT 'DROP ' || OBJECT_TYPE || ' ' || OBJECT_NAME || ';' FROM user_objects WHERE object_type IN ('FUNCTION','PROCEDURE') UNION
SELECT 'PURGE RECYCLEBIN;' FROM dual) ORDER BY 1 ASC;

SPOOL OFF
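When run, the spooled droptables.LOG ends up containing one generated DROP statement per object, along these lines (the object names here are made up):

```sql
DROP SEQUENCE MY_SEQ;
DROP TABLE MY_TABLE CASCADE CONSTRAINTS;
DROP VIEW MY_VIEW;
PURGE RECYCLEBIN;
```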

2. Run this SQL file in the SQL*Plus console:

SQL> @C:\createdroptables.sql

3. Rename the droptables.LOG file (created in the C:\ drive) to droptables.sql.

4. Run that SQL file in the SQL*Plus console:

SQL> @C:\droptables.sql

5. Check the tables in the schema by running the query below:

select table_name from user_tables;

6. You should get no rows back, i.e., it is now an empty schema.

Good Luck recreating repository.

How to integrate Netegrity with Documentum?


Documentum provides one authentication plug-in with Content Server. This plug-in allows you to use the Netegrity SiteMinder Policy Server with Content Server. The plug-in supports Web-based Single Sign-On (SSO) and strong authentication.


Documentum Netegrity Authentication Plugin 'dm_netegrity'
=========================================================
The Documentum Netegrity Authentication Plugin allows the Documentum Content Server to authenticate users based on Netegrity Single Sign-On tokens instead of passwords. This enables Documentum web applications for Netegrity Single Sign-On. In order to use this plugin, it is necessary to purchase the Netegrity SiteMinder product.

Before installing the Documentum Netegrity plugin, please check that the following requirements are met:

1. Create a 4.x web agent on the Policy Server using the Policy Server user interface. Check the "Support 4.x agents" box and enter the relevant information, such as the shared secret. This is required because the plugin is a custom agent, and the Policy Server will communicate with the plugin only when the "Support 4.x agents" option is enabled. (See the Policy Server Design Manual.)

2. Check that the dm_user object created for the Netegrity user has its user_name, user_os_name, or user_ldap_dn attribute set to a value matching the user credentials that were used to get the token on the application server side of the integration. This is required because the plugin not only validates the token but also retrieves the user credentials from the session specification; it then checks whether this value matches any one of user_name, user_os_name, or user_ldap_dn.


To install the Documentum Netegrity authentication plugin, follow these instructions:

1. Copy the file
dm_netegrity_auth.dll (Windows) or
dm_netegrity_auth.so (Solaris / AIX / Linux)
dm_netegrity_auth.sl (HPUX)
to the authentication plugin location (usually $DOCUMENTUM/dba/auth).

2. Copy the file dm_netegrity_auth.ini to the same location.
Edit this file and set all mandatory parameters.

3. Copy the supporting shared libraries:
Windows: copy the files smagentapi.dll & smerrlog.dll to %DM_HOME%\bin
Solaris/AIX: copy the files libsmagentapi.so & libsmerrlog.so to $DM_HOME/bin
Linux: copy the files libsmagentapi.so, libsmcommonutil.so & libsmerrlog.so to $DM_HOME/bin
HPUX: copy the files libsmagentapi.sl & libsmerrlog.sl to $DM_HOME/bin

4. Restart the docbase. You can verify that the plugin has been loaded by looking in the main server log file ($DOCUMENTUM/dba/log/<docbase>.log) for an entry starting with "[DM_SESSION_I_AUTH_PLUGIN_LOADED]info".

This completes the server-side installation. Refer to the WDK documentation to setup the application server side.


To test the plugin infrastructure, turn on the server tracing flag "trace_authentication". The tracing information will be written to the server log, and plugin-specific tracing will be written to dm_netegrity_<docbase>.log in the $DOCUMENTUM/dba/log directory.

How to manage the visibility of users' cabinets in Documentum Server?


Scenario 1: Only owners will be able to see their cabinets; not even superusers and system administrators will be able to see them.

1: Set is_private = 1, which is the default.


Scenario 2: Some group (admin users or another group of users) needs to see the cabinets of all users.

1: Set the is_private attribute of the dm_cabinet objects to 0.
2: Create a default_user_cabinet_acl where:
dm_world = none
admingroup = read
owner = delete
3: Apply the above ACL to all the cabinets.
4: You can also apply the ACL to the dm_cabinet type itself, so that whenever a new cabinet is created the ACL is applied automatically. One advantage of this approach is that you can change the ACL at any time and all cabinets will pick up the change. For example, if another group, such as managers, needs to see all the cabinets in your organization, you can add that group to this ACL and assign them access.
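Setting is_private and attaching the shared ACL on existing cabinets can be scripted with DQL along these lines. This is only a sketch: the ACL name and domain are the illustrative values from the steps above, and the UPDATE ... OBJECTS syntax should be verified against your server version's DQL reference before running it:

```sql
-- Open every cabinet up and attach the shared ACL (names are illustrative)
UPDATE dm_cabinet OBJECTS
SET is_private = 0,
SET acl_name = 'default_user_cabinet_acl',
SET acl_domain = 'dm_dbo'
```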

Scenario 3: You are integrating LDAP authentication and want to create default cabinets for users.

1: You can create the cabinets by mapping users.
2: You can create the cabinets by passing one more argument to the LDAP synchronization job's method:
-create_default_cabinet true
Insert this before the -full_sync true/false parameter.

This creates a cabinet named with the user's last name, first name (for example, "Singh, Kulveer"). In this case, however, the ACL you created in Scenario 2 is not applied to the cabinet, even though you applied the ACL to the dm_cabinet type itself. This is strange behaviour, and I don't know why Documentum does so.

OR

You can create the cabinet by mapping an LDAP attribute in the LDAP config object:

Go to DA -> Basic Configuration -> LDAP Servers -> Mapping tab -> Property Mapping table -> Add new property

There you can map the dm_user object's default_folder attribute to an LDAP attribute, for example:

Default_folder = {$sn}

Refer to the Documentum Administrator guide for more detail regarding this LDAP mapping.

3: Step 2 creates cabinets only for the new users being pulled in by LDAP (in both cases). If you want to create cabinets for users that were already pulled into Documentum, then either:
A: Create the cabinets for existing users by script and set is_private = 0, or
B: Delete all the users in Documentum and run the LDAPSync job; all the users will be recreated with their default cabinets. Note: some group and ACL information may be lost when you delete the users, but this again depends on your LDAP configuration.

4: In step 3 the users' cabinets are created, but you still have to run a script to set is_private = 0 and apply the ACL created in Scenario 2. This can be done by creating a job, or by changing the existing LDAPSync job to do both. Which route to take depends entirely on the complexity of your system's user base.

You can use any of these approaches, based on your requirements and the complexity of the system.

I would like to hear comments on the above approaches. Let's make them better by sharing our knowledge.

All About Dump and Load in Documentum for Docbase Migration


-Dump and load is a feature built into Documentum.
-It allows you to take the entire contents of your docbase and write them out to a single file. This can be done not only for content but also for users, groups, ACLs, etc.
-The file that gets written out is in a proprietary format, so the only thing you can do with it is load it into another docbase; hence "dump and load".
-However, this feature is often used for:
• Backing up your docbase (although you will still probably want a more reliable backup process, like a file-system backup).
• Moving or copying your docbase to a new machine or environment (i.e., creating a test docbase from your production docbase).
• It is also frequently the recommended upgrade path when moving to a new version of Documentum's server product.

- Run dm_clean, dm_logpurge, and dm_filescan before dumping a docbase. This avoids dumping unwanted objects.
- The docbase should be down.
- It should not be in use by users.
- All active jobs must be made inactive.

Dump:
-The most common method for performing the dump is Documentum's API interface.
-On the server where Documentum is installed you will find a file called iapi32.exe.
-Look in your Documentum server directory under product\4.x\bin, and run this application from a DOS window.
-It will prompt you for the docbase you wish to connect to, as well as a username and password.
-Connect to the docbase you wish to dump.
-You will soon be at an API command prompt.
-From this prompt, you can issue any of Documentum's API commands.
-In this case we will be issuing the commands to create a dump file from the docbase you are currently connected to.
-In most cases, the following set of commands will dump all of the relevant information out of your docbase.
-It will extract all of your content objects (and their content), as well as your formats, users, and other kinds of objects with their data:

create,c,dm_dump_record
set,c,l,file_name
c:\path\fileName.dump
set,c,l,include_content
T

append,c,l,type
dm_sysobject
append,c,l,predicate
2=2

append,c,l,type
dm_format
append,c,l,predicate
2=2

append,c,l,type
dm_user
append,c,l,predicate
2=2

append,c,l,type
dm_assembly
append,c,l,predicate
2=2

append,c,l,type
dm_group
append,c,l,predicate
2=2

append,c,l,type
dm_relation
append,c,l,predicate
2=2

append,c,l,type
dm_relation_type
append,c,l,predicate
2=2


append,c,l,type
dmi_queue_item
append,c,l,predicate
2=2


And so on; repeat the above commands for every object type you want to export.
#
# NOTE: also dump any user-defined non-sysobject types.
#
#
save,c,l
getmessage,c


-This script dumps all dm_sysobject, dm_format, dm_user, etc., objects from your docbase.
-The content of these objects will be included as well.
-You will notice that for each object type we append a predicate of "2=2".
-Since the predicate is required, this is a way of tricking Documentum into exporting all objects.
-You could have used other criteria instead, such as:
• object_name = 'xyz'
• folder('/folderName1', descend)
-Once the dump is complete, you will have a file c:\path\fileName.dump that contains all of your docbase information.
-This file can then be loaded into a new docbase of your choice.
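On the target side, loading follows the same IAPI pattern with a dm_load_record object. A minimal sketch (the file path is illustrative, and options such as object relocation vary by server version, so check the Content Server Administrator guide):

```
create,c,dm_load_record
set,c,l,file_name
c:\path\fileName.dump
save,c,l
getmessage,c
```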

Please feel free to comment. I would like to hear more suggestions and improvements for this.

How to configure LDAP object in Documentum


To set up an LDAPConfig object and import the users, you may follow the steps documented in the eContent Server Administrator Guide, Appendix C, "Using an LDAP Directory Server". However, please note the following three items that are not detailed in the documentation.

1. You must provide an email address in the LDAP server and LDAP Configuration object in Documentum Administrator (DA).

2. Use groupofuniqueNAMES instead of groupofuniquemembers. The groupofuniquenames object class is defined in the schema, whereas groupofuniquemembers is not.

3. Restart the Docbase once you have created the LDAP Configuration object.

STEPS FOR TESTING LDAP FUNCTIONALITY:

Below is a brief procedure for testing the LDAP functionality, without going into detail.

1. Install Netscape Directory Server on NT or Unix. A schema is created automatically when you install the LDAP server.

2. Add a user and a group into the LDAP Directory Server (using Netscape LDAP Directory Server), and PLEASE PROVIDE AN EMAIL ADDRESS.


3. Make an LDAP Configuration object in a Docbase, using DA. Below is a sample of the setting:



Hostname : your computer name or IP address where LDAP server runs,

Port : 389 by default. Note: this depends on your LDAP server, so consult the LDAP team.

Person Object Class : person

Group Object Class : groupofuniquenames [NOT groupofuniquemembers]

Person Search Base : o=xyz.com (This is the domain name of the computer where LDAP server is installed.)

Group Search Base : o=xyz.com (Same as Person Search Base)


Enabling LDAP Changelog Information : uncheck


Name : uid

OS Name : uid

Email Address : mail


4. Restart Docbase


5. Run the LDAPSynchronize job


6. Check whether users and groups defined in LDAP server are imported into the Docbase in DA.


7. Try to connect to the Docbase with Intranet Client (IC) or Desktop Client (DTC) as the user defined in LDAP Server.



Again, the following items are corrections or clarifications to the documentation:

1. Provide an email address in the LDAP server and in the LDAP Configuration object in Documentum Administrator (DA).

2. Use groupofuniqueNAMES instead of groupofuniquemembers. The groupofuniquenames object class is defined in the schema, whereas groupofuniquemembers is not.

3. Restart the Docbase once you have created an LDAP Configuration object.

Please feel free to give feedback/comments/suggestions/improvements.

Story behind when a document gets deleted in Documentum Repository


When a document is deleted from Documentum, only the document's metadata is deleted. The content itself (i.e. the object pointed to by the dmr_content object) remains in the content storage area until the IAPI script generated by the dmclean utility is run. If the dmclean utility has not been run, it is still possible to recover the object's content file.

Note: The document's attributes cannot be recovered without going back to a database backup. Below is an outline of a strategy which can be used to do this recovery.


Recovering the Information of the deleted Object

1. The Following DQL will retrieve the object_ids that correspond to the content objects that have been deleted

select r_object_id from dmr_content where any parent_id is null

If you want to retrieve data for a particular client, you can use the following information to build your DQL:

a) What was the name of the object? This is the object_name attribute.

b) What format was the document? This is the full_format attribute.

c) What was the date/time of the last checkin? This is the set_time attribute.

d) What was the name of the client machine where the file was last checked in? This is the set_client attribute.


The DQL created in this case, according to your search criteria, is as follows (fill in the placeholders):

select r_object_id from dmr_content where any parent_id is null and
set_client = '<client_machine>' and full_format = '<format>' and set_time > DATE('<date>')


2. Using the object IDs recovered, execute the following IAPI commands to pick up the path and other relevant information for the object under consideration (substitute the content object's ID):

API> apply,c,<content_object_id>,GET_PATH
API> next,c,q0
API> get,c,q0,result

This returns the file system path to the object. For example:

/disk2/dm20/data/solar20o/content_storage_01/00000065/80/00/14/07


How to restore the document

To restore the document back to its original location, first create a document object of the deleted type using IAPI (substitute the ID returned by the create call for <new_object_id>):

Creating a new dm_document object using IAPI
API> create,c,dm_document
0900006580005a65

Setting the name of the deleted object
API> set,c,<new_object_id>,object_name
SET> Restored Document
OK

Linking the deleted file to the newly created object
API> setfile,c,<new_object_id>,/disk2/dm20/data/solar20o/content_storage_01/00000065/80/00/14/07,<format>
OK

Linking the object to a directory in the Cabinets where the file will be restored
API> link,c,<new_object_id>,/Temp
OK

Saving the file
API> save,c,<new_object_id>

This restores the deleted document at the location /Temp with the name "Restored Document".
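A quick DQL check (using the object name set above) confirms that the restored document is in place:

```sql
select r_object_id, object_name
from dm_document
where object_name = 'Restored Document'
  and folder('/Temp')
```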


How to delete a large number of objects using iDQL and iAPI on Documentum Server?


Deleting a large number of objects is a routine task when administering a Documentum server, and it's not practical to delete more than a couple of thousand objects using DA or even iDQL: it takes a lot of time and can sometimes hang the server. The best way is to create a text file of API commands for the objects that need to be deleted, and I have compiled the steps to do this the easy way. It could be automated by writing a job, but if you need to delete immediately by running a script, here is the solution.

1: Create a DQL file to extract the r_object_ids of the objects you want to delete.

For example, name this file something like "extract_objectIds.dql":
---------------------------------------
select 'destroy,c,' as destroy, r_object_id from dm_document
go
quit

---------------------------------------
2: Create a batch file, named something like extract_objectIds.bat, to run the DQL file created above:

----------------------------------------
@echo off
setlocal & pushd


rem -=-=-=-=-=-=-=-=-=--
rem - Set these values -
rem -=-=-=-=-=-=-=-=-=--
set DOCBASE_NAME=xxxx
set DCMT_SUPERUSER=yyyy
set DCMT_SUPERUSER_PW=zzzzz

echo
"%DM_HOME%\bin\idql32" %DOCBASE_NAME% -U%DCMT_SUPERUSER% -P%DCMT_SUPERUSER_PW% -Rextract_objectIds.dql >object_Ids.log
:EOF
endlocal & popd

-----------------------------------

3: Extract all the API commands from object_Ids.log (produced in the previous step) into a new file called "destroy_objects.api".
For example, your object_Ids.log will contain lines like:

destroy,c, 09xxxxxxxxxxxxxx
destroy,c, 09xxxxxxxxxxxxxx
destroy,c, 09xxxxxxxxxxxxxx
destroy,c, 09xxxxxxxxxxxxxx
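On a Unix-like shell (or Cygwin), step 3 can be done in one line; a sketch that filters the log rows and squeezes out the column padding (the sample IDs below are fake):

```shell
# Sample of what iDQL writes into object_Ids.log: two columns, padded with spaces
printf 'destroy,c,      0900000000000001\ndestroy,c,      0900000000000002\n' > object_Ids.log

# Keep only the destroy rows and strip the padding to get valid API commands
grep '^destroy,c,' object_Ids.log | tr -d ' ' > destroy_objects.api
cat destroy_objects.api
```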

4: Create a new batch file (or update the batch file above) to run the API command file.

New batch file example (destroy_objects.bat):

----------------------------------------------
@echo off
setlocal & pushd

rem // $Id$
rem -=-=-=-=-=-=-=-=-=--
rem - Set these values -
rem -=-=-=-=-=-=-=-=-=--
set DOCBASE_NAME=xxxx
set DCMT_SUPERUSER=yyyy
set DCMT_SUPERUSER_PW=zzzzz


rem Destroying Objects
echo
echo.
"%DM_HOME%\bin\iapi32" %DOCBASE_NAME% -U%DCMT_SUPERUSER% -P%DCMT_SUPERUSER_PW% -Rdestroy_objects.api -lobjectsDeleted.log

:EOF
endlocal & popd
------------------------------------------------

Run this batch file after setting your docbase name, user ID, and password.

That's it.

Happy destroying objects!


Important queries related to Fulltext Indexing Server configuration


These are a few important queries that are helpful while you are installing the Fulltext Indexing Server.

SELECT r_object_id,index_name,ft_engine_id,is_standby
from dm_fulltext_index

The query returns two rows: one with an index name ending in 00, which represents the primary index, and one with an index name ending in 01, which represents the standby index.

SELECT r_object_id,object_name FROM dm_ftengine_config

This query lists the fulltext engine configuration objects.