
Technical Terms

Bug and Enhancement

Bug: A bug is an error, flaw, or problem with the functionality of an application/program that may prevent it from working correctly or cause it to produce an incorrect or unintended result. A bug is a deviation from the design document: a coding error or a defect in the code of the application/program. It arises from the omission or misinterpretation of a particular feature/condition/logic while developing the application.

Enhancement: An enhancement is an improved, advanced, additional, new, or sophisticated feature or functionality added (or to be added) to an existing application/program as desired by the end user. It relates to an improvement over the current capabilities of a particular process/application/program. Enhancements are normally added when the existing functionalities/features of the application/program are not sufficient to produce the desired result.

What is Regression Test? Regression tests prove that the new/updated functionality has not compromised the existing functionality. They are selected based on the areas impacted by the functionality delivered in this project & are included in the overall test suite. Regression Test areas will be selected by comparing the related features in Phase I and Phase II.

What is Aggregation Rule? An Aggregation Rule is a combination of Search Rules and Metadata that updates a particular Content Flag if its conditions are true. Aggregation Rules are created and edited in the Aggregation Rule Properties window, which provides fields for the name, description, folder name, and priority of the rule. The Name field holds the name of the rule itself, and the Description field holds a brief description of the rule added by the author. The folder name is the name of the Aggregation Folder that contains the resources meeting the rule's evaluation criteria. The Priority field lets the author prioritize the rule relative to the other enabled rules: the higher the number, the higher the priority. For example, a rule with a priority of 5 is evaluated before a rule with a priority of 3, and the default priority of all rules is 0 (the sketch below illustrates this ordering). The Enabled check box flags the rule for evaluation: if checked, the rule is evaluated during a CICSPlex discovery; if not, the rule is unavailable and consequently is not evaluated.
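A minimal Python sketch of that evaluation order, with an invented AggregationRule type (the real rules are defined in the Properties window, not in code):

from dataclasses import dataclass

@dataclass
class AggregationRule:
    name: str
    priority: int = 0      # the default priority of all rules is 0
    enabled: bool = False  # only enabled rules are evaluated

def evaluation_order(rules):
    # Disabled rules are skipped; higher priority evaluates first,
    # e.g. priority 5 before priority 3.
    return sorted((r for r in rules if r.enabled),
                  key=lambda r: r.priority, reverse=True)

rules = [AggregationRule("A", 3, True),
         AggregationRule("B", 5, True),
         AggregationRule("C", 9, False)]
print([r.name for r in evaluation_order(rules)])  # ['B', 'A']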

What is Sanity Testing? Sanity testing is cursory testing, performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. Sanity testing is a subset of regression testing. It normally includes a set of core tests, such as basic GUI functionality, to demonstrate connectivity to the database, application servers, and printers. It is a short application walk-through test to confirm that the release has been delivered/installed/configured correctly.

What is SharePoint? Microsoft SharePoint products and technologies include browser-based collaboration and a document-management platform. These can be used to host web sites that access shared workspaces and documents, as well as specialized applications like wikis and blogs, from a browser. Users can manipulate proprietary controls or pieces of content called web parts to create or modify sites. SharePoint is not intended to replace a full file server. Instead, it is targeted as a collaborative workspace, a tool for the management and automation of business processes, and a platform for social networking; Microsoft markets this as Collaboration, Processes, and People. The SharePoint interface is web-based, presenting components such as task lists or discussion panes. SharePoint sites are actually ASP.NET 2.0 applications, served using IIS and backed by a SQL Server database for data storage. All site content data is stored within a SQL Server database called WSS_Content. The term "SharePoint" collectively refers to two products: the platform and the services. WSS is the platform and is included with Windows Server, while MOSS provides additional services and is licensed separately. As of 2009, the most current of these are Windows SharePoint Services 3.0 (WSS) and Microsoft Office SharePoint Server 2007 (MOSS). WSS comprises the core functionality, with MOSS built on top to provide extra features. Previous versions of this software used different names (SharePoint Portal Server 2003, for example) but are still referred to as "SharePoint". In the beginning, SharePoint was a mixed bag of products and technologies; among them was Site Server in 1998. The SharePoint initiative was collectively code-named Tahoe.

What is the difference between Databases and Spreadsheets? If a database is so much like a spreadsheet, why can't I just use a spreadsheet? Databases are actually much more powerful than spreadsheets in the way you're able to manipulate data. Here are just a few of the actions that you can perform on a database that would be difficult if not impossible to perform on a spreadsheet (see the sketch below): retrieve all records that match certain criteria; update records in bulk; cross-reference records in different tables; perform complex aggregate calculations.
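As an illustration, here is a minimal Python/SQLite sketch of those four operations; the table and column names are invented for the example:

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Employees (Id INTEGER PRIMARY KEY, Name TEXT, Dept TEXT, Salary REAL)")
cur.execute("CREATE TABLE Departments (Dept TEXT PRIMARY KEY, Location TEXT)")
cur.executemany("INSERT INTO Employees VALUES (?, ?, ?, ?)",
                [(1, "Ann", "Sales", 50000), (2, "Bob", "Sales", 52000), (3, "Cam", "IT", 61000)])
cur.executemany("INSERT INTO Departments VALUES (?, ?)", [("Sales", "NY"), ("IT", "HYD")])

# 1. Retrieve all records that match certain criteria
cur.execute("SELECT Name FROM Employees WHERE Salary > 51000")
# 2. Update records in bulk
cur.execute("UPDATE Employees SET Salary = Salary * 1.05 WHERE Dept = 'Sales'")
# 3. Cross-reference records in different tables
cur.execute("SELECT e.Name, d.Location FROM Employees e JOIN Departments d ON e.Dept = d.Dept")
# 4. Perform complex aggregate calculations
cur.execute("SELECT Dept, AVG(Salary) FROM Employees GROUP BY Dept")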

What is Correlation Level? The percentage of cases in which a content flag and value are present whenever a given metadata element is also present, or whenever a given search rule is also hit.
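One possible reading of that definition, as a toy Python calculation with made-up counts:

docs_with_metadata = 200           # documents where the metadata element is present
docs_with_metadata_and_flag = 150  # of those, documents that also carry the content flag/value
correlation_level = 100.0 * docs_with_metadata_and_flag / docs_with_metadata
print(f"{correlation_level:.0f}%")  # 75%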

What are the Data Models in Document Repository? There are three data models: 1. the Basic Document Repository Data Model; 2. the Strongly Typed Data Model (document level); 3. the Object Level Strongly Typed Model.

1. Basic Document Repository Data Model. All the filing parsers, such as the SEC parser, SEDAR parser, ASX parser, and the web crawlers, store documents in this model. Document_tbl is the highest-level entity in the document repository: when a parser starts processing a file from the feed, it creates a document. The parser then checks the content of the file to decide the format of the data; for example, if it finds an <html> tag in the document, the format is HTML, and if it finds an <xml> tag, the format is XML. Once it decides the format, it inserts a row into Version_tbl with the matching formatId along with the physical path of the file on the file server. After creating the version, it parses the metadata from the document, such as CIK, filingDate, period of report, accession number, etc., and inserts those into DocumentElement_tbl as elements. The metadata sometimes needs to be grouped into sets. For example, we store the street address and state of a company as elements; if two addresses are reported in the document, we need to store two states and two streets, but how do we associate which street belongs to which state? This is where the element groupId comes into the picture: we tie related elements together with an element groupId. All the elements of one business address share the same element groupId, so by matching elementGroupId we can tell which street is related to which state, as the sketch below illustrates.
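A small Python sketch of the groupId pairing idea, using invented rows (the real elements live in DocumentElement_tbl):

from collections import defaultdict

elements = [  # (elementGroupId, elementName, value)
    (1, "street", "100 Main St"), (1, "state", "NY"),
    (2, "street", "7 Harbour Rd"), (2, "state", "CA"),
]

addresses = defaultdict(dict)
for group_id, name, value in elements:
    addresses[group_id][name] = value  # elements sharing a groupId belong together

for group_id, addr in addresses.items():
    print(group_id, addr["street"], "->", addr["state"])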

2. Strongly Typed Data Model. The core data model is extended with a set of strongly typed element tables, for two main reasons. First, while parsing a document we capture the raw data in character format, because we do not know the format of the data before parsing; for example, filingDate can be reported in many formats, so we cannot strictly store it in a date data type while parsing. But raw data kept as text is prone to bad data and cannot really be validated, and this limitation led us to extend the Document Element to a set of document-level strongly typed tables. Second, interpreting the raw metadata becomes difficult because different sources report metadata differently. For example, the SEC has a concept of formType, which specifies what the document is about, and we store it as an element; for the same purpose in SEDAR, four identifiers need to be combined to find what kind of document it is. These source-specific reporting styles make the issuing logic source-specific, so to make issuing conditions as generic as possible we introduced ciq elements. These ciq elements are stored in the strongly typed tables and represent a CIQ-defined meaning and format for the metadata. We derive the ciq elements from the source elements available in DocumentElement_tbl through a process called metadata mapping: there is a page in CDRM for metadata mapping where the research team maps the source elements to ciqElements, and based on that mapping the source elements are copied to the strongly typed tables with ciqElements.

3. Object Level Strongly Typed Model. We have a similar strongly typed model at the object level. The only difference is that instead of referring to Document_tbl it refers to DocumentToObjectRel_tbl, and in this model we store metadata at the object level. For example, if two companies are linked to a document that contains an income statement, we store the metadata for the two companies separately in the object-level tables; if the income statement belongs to only one company, we store the data against that one company only.

What is Backtesting? Backtesting is the process of testing a trading strategy on prior time periods. Instead of applying a strategy for the time period going forward, which could take years, a trader can run a simulation of his or her trading strategy on relevant past data in order to gauge its effectiveness, as in the sketch below. Most technical-analysis strategies are tested with this approach.
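A toy Python backtest, unrelated to any CIQ tooling, just to show the idea of replaying a rule over past prices:

prices = [10, 11, 12, 11, 13, 14, 13, 15, 16, 15, 17]

def backtest(prices, window=3):
    cash, shares = 100.0, 0.0
    for i in range(window, len(prices)):
        avg = sum(prices[i - window:i]) / window
        if prices[i] > avg and shares == 0:   # buy signal: price above moving average
            shares, cash = cash / prices[i], 0.0
        elif prices[i] < avg and shares > 0:  # sell signal: price below moving average
            cash, shares = shares * prices[i], 0.0
    return cash + shares * prices[-1]         # final portfolio value

print(backtest(prices))  # > 100 means the rule would have made money on this data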

What is BI Admin tool? BI Admin is an integrated MIS and administration tool created to facilitate various tasks of CIQ middle and operational level management. It can generate various reports based on standard queries and subqueries of different Process Groups / Processes: Production Flow Statistics Online Issue Date, Production Flow Statistics Online Done Date, Online Statistics, Employee Wise Statistics, Production Time Details, Move Documents, VC Reports, Hourly Production Counts, Key Development Wise Details, and Entity Details.

What is BI Non-English Process? The BIGP and Business Relations (BR) groups in Hyderabad, which process Public Companies, rely on Annual Reports filed with the SEC or other stock exchanges. Generally, companies filing with the SEC submit their Annual Report in English, but a few Asian and other non-Asian companies (which do not file with the SEC) submit filings to their respective stock exchanges in their native language. As we aim to process these companies' details in the Business Intelligence process as well, we need a source for them. This source is known as the BI Research document. BI Research documents are created for BD, BR (Products, competitors, and non-competitors), O&D, Compensation, and Ownership, and are uploaded through CDRM-Translation Manager to the above-mentioned data sets.

What is Bugtracker? Bugtracker tracks bugs. Usually a bug is an error, flaw, mistake, failure, or fault in a program that prevents it from working correctly or produces an incorrect result; bugs arise from mistakes and errors made by people. Bugtracker tracks not only bugs such as technical and data issues, it also tracks requisitions for further enhancement of user applications and other processes. We have two Bugtrackers: one is operated globally from the New York office for data issues, and the other is operated locally from the Hyderabad office. The local Bugtracker is used especially for technical and other process-related bugs.

What is Celsus Application? Celsus is an application developed in New York. It lets the user upload documents to the NY warehouse, and ownership documents can also be viewed. User groups from Hyderabad, New York, London, Manila, Argentina, etc. can use this application. A search option is also available, but it is currently under testing and may take a couple of months to be released. The DD team in our department is using this application.

What is CIK? The Central Index Key (CIK) is used on the SEC's computer systems to identify corporations and individual people who have filed disclosures with the SEC.

What is Collection Entity and Collection Process? Collection Entity: an object against which information is collected. With reference to PSBD, Company ID and Source Document ID are examples of collection entity types. Collection Process: a particular data set for which information is to be collected is called a Collection Process in BI Common Tracker. Private Company Short Business Descriptions and Public Company Short Business Descriptions are examples of collection processes in BI Common Tracker.

What is Connotate? Connotate is a provider of web-monitoring and web-mining solutions. Its machine-learning software transforms the vast, passive Web into a focused set of actionable and highly personalized data sources. The Connotate software uses automation tools and machine-learning Information Agents that can be trained to monitor, extract, repurpose, and integrate web content. Connotate's Agent Community is a scalable platform driven by a machine-intelligent Agent platform. An intuitive graphical user interface (GUI) supports fast, easy configuration of Agents and requires no programming. Information is readily customizable, and content is XML-enabled and delivered through alerts, emails, wireless devices, pagers, and portals, or downloaded to databases and spreadsheets. The Connotate software applications are tailored to specific audiences, delivering personalized, actionable intelligence unique to the individual end user. In our case, we receive alerts via email.

Connotate Process (Summary): We use the Connotate Agent Studio to create Agents for websites. An Agent can be thought of as a human who goes to websites on a predefined schedule, checks all updates relating to them, and alerts us if there are any changes. In the Agent Studio we navigate to a website, visit the pages that we would like to receive updates/alerts on, extract the required information from these pages (as we might not require all the data present on a particular webpage), and at the same time teach it the design of the website. We extract the information on a webpage using a pre-defined or user-defined Schema having various Elements. We then save the Agent and give it a unique name. Finally, we configure the Agent on the Skill Settings page. Thereafter, the Agent must be submitted to the Catalog, after which it is ready for subscription and publishing so that it can begin working. A researcher subscribes to and publishes this Agent through the Agent Library. Thereafter, the Agent starts functioning and keeps checking the website every day (or per the schedule assigned to it) for changes, which are then notified to the user in different ways through alerts.

The Connotate project is based on deploying the Web-based application software Agent Community GEN2, provided by Connotate Technologies, a web-monitoring and web-mining solutions provider. Agent Community GEN2 helps users train content in a particular Web page and track changes/updates to that page. The application also helps in extracting the current content from the trained Web page.

What is Content Search Tool? Unlike the Metadata Mapping Tool, which takes a source-level approach to assigning content flags based on specific metadata values, the Content Search Tool takes a document-level approach: it literally searches the content. The following four methods have been proposed for performing this search (a sketch of method 3 follows the list):

1. ISYS Index based search
2. SQL search
3. Regular Expression search
4. CIQ pattern search
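As an illustration of method 3, a hypothetical Regular Expression search rule run over a tiny batch of document texts (rule and documents invented); the hit count per document is simply the number of matches:

import re

search_rule = re.compile(r"income\s+statement", re.IGNORECASE)

documents = {
    101: "The audited INCOME STATEMENT for the year ...",
    102: "Notice of annual general meeting ...",
}

for doc_id, text in documents.items():
    hits = search_rule.findall(text)
    if hits:
        # hit count = number of times the rule's search string matched
        print(doc_id, "matched", len(hits), "time(s)")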
How does the tool work? The idea is to integrate the above search methods one by one. The first phase of the tool, called Content Search Admin, is under development using the ISYS search engine, which is third-party software. Using Content Search Admin, we can create search rules and run them on just a sample batch of documents; the search results are stored and displayed in the tool. If we find the rules effective, we can run them on all the documents in the repository. In Phase II, we have to figure out how we are going to use the search results to assign content flags and update document elements in the database so that they can be used in document-issuing processes. Here again, we want to assign content flags with confidence levels. There are both mandatory and non-mandatory content flags. If the confidence levels are 100% for all mandatory content flags, those documents are issued to the relevant data collection processes without going through any manual metadata validation. If the confidence level is less than 100% for any of the mandatory content flags of a document, that document is issued to the metadata validation team to assign the relevant content flags.

What is Correlation Statistics? The statistics showing the correlation levels of different metadata and search rules with a particular content flag and value.

What is Data feed? A data feed is a mechanism for users to receive updated data from data sources. It is commonly used by real-time applications in point-to-point settings as well as on the World Wide Web, where it is also called a Web feed. A news feed is a popular form of Web feed, RSS feeds make dissemination of blogs easy, and product feeds play an increasingly important role in e-commerce and internet marketing. Data feeds usually require structured data.

At present, however, unstructured data such as HTML pages dominates the Web, so data feeds have huge potential to make a bigger impact on the Web in the future.

A data feed is also an electronic transmission of inventory data from one server to another, sometimes called an upload or export, usually to streamline data to third-party advertisers like Autotrader, Cars, or Autobytel. A data feed is simply a structured form of information about the products you carry. Data feeds are most commonly based on MS Excel, MS Access, or a text file with every value separated by a delimiter (commas, pipes, or tabs); a minimal parsing sketch follows the list below. A data feed forms the backbone from which comparison engines derive and display information about your products. Data feeds are helpful in (at least) three ways:

1. They can be used to ensure the success of your marketing campaign on a comparison engine by optimizing your product listings.
2. Carefully creating and managing your data feeds can ensure that your customers have access to complete and up-to-date information about your products and prices.
3. Feeds give you significant flexibility in creating compelling, keyword-rich product descriptions versus "snipped" comparison engine descriptions, which are sometimes incoherent keyword gibberish.
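A minimal Python sketch of reading such a pipe-delimited feed; the fields are invented for the example:

import csv, io

# A tiny in-memory stand-in for a pipe-delimited product feed file.
feed = io.StringIO("sku|name|price\nA1|Widget|9.99\nB2|Gadget|14.50\n")
for row in csv.DictReader(feed, delimiter="|"):
    print(row["sku"], row["name"], float(row["price"]))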

What is Data Issue? A data issue is an outcome of a possible violation of a data policy in processing a data item. The violation can be observed by an internal quality-check person, though more often these are raised by clients; sometimes they are simply requests. Example: in the present project, a data issue might ask to include a fund company name in the Advised Fund Companies list, or to give the primary business name properly where the existing one is a legal name.

What is data mining? Data mining is the use of automated data analysis techniques to uncover previously undetected relationships among data items. Data mining often involves the analysis of data stored in a data warehouse. Three of the major data mining techniques are regression, classification, and clustering.

What is Data Policy? A data policy is a guiding principle or procedure of general applicability adopted in the collection, editing, organizing, and presentation of a particular data item. Example: in the present DD process, we should not create documents for SS-active companies for PSBD; an SS-active company is a public company, and those companies are processed in the public-companies workflow.

What is Database? A database is a structured collection of records or data that is stored in a computer system. The structure is achieved by organizing the data according to a database model; the model in most common use today is the relational model.

What is Database Tables? Just like Excel tables, database tables consist of columns and rows. Each column contains a different type of attribute and each row corresponds to a single record. For example, imagine that we were building a database table that contained names and telephone numbers. We'd probably set up columns named FirstName, LastName, and TelephoneNumber, then simply start adding rows underneath those columns containing the data we're planning to store. If we were building a table of contact information for a business with 50 employees, we'd wind up with a table that contains 50 rows.

What is Document Repository? It is a repository of documents. A document here means an SEC filing, a SEDAR filing, or any other electronic document from which we collect data for CapitalIQ. Architecturally, the Document Repository consists of two things: a SQL database containing the database schema and associated data, and a file server holding all the physical files associated with the database records.

a) Where do documents come from? All public companies have to file their financial information with the stock exchange where they are listed. For example, in the United States public companies file information with the Securities and Exchange Commission in electronic format, and we can download the files from www.sec.gov. Similarly, in Australia companies file information with the Australian stock exchange (www.asx.com.au), and Canadian companies file with www.sedar.com. Each country has its own rules and regulatory authorities to which companies file public information, and all of it is publicly available for download. But downloading the documents one by one manually from these huge sites is impractical, so some vendors provide a feed of this publicly available information; we buy the feed from those vendors, process it, and store it in our repository. We also have an in-house documents research team that researches, on individual websites, documents the feed vendors don't cover. This team manually adds the documents it finds through the CDRM web application (which will eventually be replaced by CELSUS, another web application developed in NY). Another source of documents is web crawling; a separate development team works on automatically crawling documents from websites. All these documents are stored in the repository.

b) How do we store documents from feed? We have scheduled applications to process the feeds we receive from vendors, with a separate application for each feed: for processing the SEC feed we have the SEC filing parser, for the ASX feed the ASX filing parser, and for the news feed the DJ news importer. These parsers run in NY; they extract basic metadata from the files and store it in the Document Repository. Whatever is stored in the NY document repository is replicated to the HYD Document Repository. The filing parser creates a document (that is, a record in Document_tbl) for each file it processes and also inserts a version record as the file's original version, with the format (TEXT, HTML, PDF, XML, etc.) based on the content available.

c) How do we use documents for collection? Once the documents are replicated to HYD, we use them for data collection in various data sets. We do not process each and every document that exists in the document repository; different data sets require different kinds of documents. For example, the financials team needs annual documents, while the people data team needs proxy documents. So we have applications that push relevant documents to the collection teams; one such tool is EDL. EDL does two things. First, it identifies the documents eligible for collecting financial data; the documents should be in text format to collect data from them, so if it finds any HTML content in a document it converts the document to text. Second, EDL executes a procedure called CommonTrackerLoader_prc, which identifies eligible documents and issues them to BI collection processes.

What is Foreign Keys? These keys are used to create relationships between tables. Natural relationships exist between tables in most database structures. Returning to our employees database, let's imagine that we wanted to add a table containing departmental information to the database. This new table might be called Departments and would contain a large amount of information about the department as a whole. We'd also want to include information about the employees in the department, but it would be redundant to have the same information in two tables (Employees and Departments). Instead, we can create a relationship between the two tables. Let's assume that the Departments table uses the Department Name column as its primary key. To create a relationship between the two tables, we add a new column to the Employees table called Department and fill in the name of the department to which each employee belongs. We also inform the database management system that the Department column in the Employees table is a foreign key that references the Departments table. The database will then enforce referential integrity by ensuring that all of the values in the Department column of the Employees table have corresponding entries in the Departments table.
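A minimal SQLite sketch of that Employees/Departments relationship, showing the database rejecting a row that breaks referential integrity:

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only with this pragma
con.execute("CREATE TABLE Departments (DepartmentName TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE Employees (
    Name TEXT,
    Department TEXT REFERENCES Departments(DepartmentName))""")

con.execute("INSERT INTO Departments VALUES ('Engineering')")
con.execute("INSERT INTO Employees VALUES ('Ann', 'Engineering')")  # OK
try:
    con.execute("INSERT INTO Employees VALUES ('Bob', 'Marketing')")  # no such department
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed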

What is FTP? File Transfer Protocol (FTP) is used to make files and folders publicly available for transfer over the Internet.

What is Hit count? The number of times the search strings of a Search Rule are hit in a document.

What is Integration Functional Test? These are functional tests to prove that the requirements for new functionality work as expected. This is an iterative process in which we test these functionalities after every test build, and the approach is covered over two phases. Initially we test the high-level test cases derived from various areas of the application; if all of those work correctly, we move on to the detailed test cases developed from the high-level test cases.

What is Metadata? Metadata is data describing data: structured data that describes the characteristics of a resource. Example: Document Elements containing details about the document such as period of document, company id, language, country, etc.

What is Object Linking? Object linking is the process of assigning a companyId to documents; unless we have the related companyId for a document, we cannot collect the data. Object linking is done by matching the metadata to ComparisonData.dbo.Symbol_tbl. We use separate symbolTypes for linking documents from different sources: for example, symbolTypeId 21 (SEC CIK) for SEC document linking, symbolTypeId 84 for SEDAR, and symbolTypeId 518 for ASX. The Symbol table is a link table between outside identifiers and CIQ identifiers such as companyId.

What is Primary key? The primary key of a relational table uniquely identifies each record in the table. It can either be a normal attribute that is guaranteed to be unique (such as Social Security Number in a table with no more than one record per person) or it can be generated by the DBMS (such as a globally unique identifier, or GUID, in Microsoft SQL Server). Primary keys may consist of a single attribute or multiple attributes in combination. Imagine we have a student records database that contains three tables. The first table, STUDENTS, contains a record for each student at the university. The second table, CLASSES, contains a record for each class session offered. The third table, ENROLLMENT, contains student enrollment records (e.g., each record represents a single student enrolling in a single course). There would be multiple records for each student (representing all the classes that student is enrolled in) and multiple records for each class session (representing all the students enrolled in that class). A student's unique student ID number would be a good choice for a primary key in the STUDENTS table; the student's first and last name would not, as there is always the chance that more than one student might have the same name. A sketch of this schema follows.
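A minimal SQLite sketch of that schema; ENROLLMENT uses a composite primary key so the same student cannot be enrolled twice in the same class:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE STUDENTS   (StudentId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE CLASSES    (ClassId   INTEGER PRIMARY KEY, Title TEXT);
CREATE TABLE ENROLLMENT (StudentId INTEGER, ClassId INTEGER,
                         PRIMARY KEY (StudentId, ClassId));
""")
con.execute("INSERT INTO ENROLLMENT VALUES (1, 10)")
try:
    con.execute("INSERT INTO ENROLLMENT VALUES (1, 10)")  # duplicate composite key
except sqlite3.IntegrityError as e:
    print("rejected:", e)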

What is Production Patch? When we begin to use applications heavily right after a tech release, we may notice problems that were not observed during regression testing, such as compatibility issues between older and newer logic and enhancements. In such cases developers go for a production patch (a patch released after a tech release) in which critical bugs and glitches are fixed, or critical software-to-hardware or operating-system compatibility issues are addressed.

What is Public company Survey? (PCS) The Public company profiles, Officers and Directors (O&D), and Business Relationships (BR) groups in Hyderabad rely on Annual Reports and Key Developments as sources for the information that they gather. Information is often out of date or missing because the primary source, the Annual Report, only comes out once a year, companies are not required to provide comprehensive details, and companies issue press releases sporadically and idiosyncratically regarding changes in this information. This is especially true of companies that do NOT file annual reports with the Securities and Exchange Commission (SEC), as non-US exchanges generally have less stringent requirements for companies to provide information about the nature and structure of their businesses. To increase the breadth, timeliness, and quality of these data sets, we would like to augment the existing sources with periodic surveys emailed to the Investor Relations departments of the public companies that we cover. Our experience surveying investment firms, and having surveyed companies for information on their competitors in the past, leads us to believe that this method will provide a significant amount of useful information that is not available anywhere else. The workflow of the Public Company Survey includes four steps:
1. Sending e-mails
2. Receiving feedback e-mails from companies, screening the e-mails, and creating documents for valid responses
3. Releasing them to the pool by CTL push based on the document flag
4. Updating each dataset with reference to the document

What is Quality Check? A quality check is a mechanism to remove bad data, i.e., unwanted data or data that goes against the policies. It can be done either automatically or manually; either way, it ensures the quality of the product. A quality check at every stage of the process ensures the quality of the product with different quality-checking parameters. Examples include avoiding certain biased words or letters.


What is SOW (statement of work)? A statement of work (SOW) is a document used in the Systems Development Life Cycle. A software vendor or services company sends a SOW to notify a client of work about to be undertaken and the agreed pricing. It is a brief summary of the financial aspects of a contract; the technical details should have already been fleshed out by this stage, though that is not always the case. The purpose of a SOW is to detail the work requirements for projects and programs that have deliverables and/or services performed.

What is Unit Testing?
In computer programming, unit testing is a procedure used to validate that individual units of source code are working properly. A unit is the smallest testable part of an application.
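A minimal example in Python's unittest module, where the unit under test is a single function:

import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        # One unit (the add function) validated in isolation.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()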

What is Web crawling?


A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, worms, Web spiders, Web robots, or, especially in the FOAF community, Web scutters. A crawler makes it possible to download documents automatically into a specific location from a site; this process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches.
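A toy crawler sketch in Python (standard library only); a production crawler would also need politeness rules such as robots.txt handling and rate limiting:

import urllib.request
from html.parser import HTMLParser
from collections import deque

class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href" and v]

def crawl(seed, limit=10):
    seen, queue = set(), deque([seed])
    while queue and len(seen) < limit:
        url = queue.popleft()
        if url in seen or not url.startswith("http"):
            continue  # visit each page once; skip relative/mailto links
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # unreachable pages are simply skipped
        parser = LinkParser()
        parser.feed(html)
        queue.extend(parser.links)  # follow discovered links breadth-first
    return seen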

What is Workflow Loader? Workflow Loader is a SQL scheduler that can issue documents to collection systems. It is more advanced than the current CTL and EDL, which are limited in the metadata they can consider when issuing documents: CTL, for example, can only look at formType, filingDate, and periodOfReport, and we cannot specify any other criteria because it cannot evaluate them. Workflow Loader is designed to evaluate any kind of criteria and is not restricted to specific element types like CTL (see the sketch below). Currently Workflow Loader issues documents to some specific processes (KeyDevs, Ownership, Managed Shares & Metadata Validation). Eventually it will replace CTL and EDL and become the centralized document issuer.
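A hedged sketch of the idea, in Python, with invented element names and processes: issuing criteria are treated as data rather than hard-coded checks, so any element type can participate:

# A document's metadata as parsed elements (values invented for illustration).
document = {"formType": "10-K", "language": "English", "contentFlag": "Financials"}

# Each process declares its own (element, required value) criteria.
process_criteria = {
    "Financials collection": [("formType", "10-K"), ("language", "English")],
    "Ownership":             [("formType", "DEF 14A")],
}

for process, criteria in process_criteria.items():
    if all(document.get(element) == value for element, value in criteria):
        print("issue document to", process)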

Other Information
What is Industry-Classification? Industry classification is the process of classifying companies and enterprises according to the activity or activities in which they are engaged. Companies are assigned an industry classification based on Capital IQ's proprietary industry tree, which is a hierarchical system of business classifications. It is a combination of the Global Industry Classification Standard, the Standard Industrial Classification, and the North American Industry Classification System, plus some custom additions. The tree ranges in specificity from the broadest sector down to product-level classifications.

What is Global Industry Classification Standard? The Global Industry Classification Standard (GICS) is used as a basis for certain Morgan Stanley financial market indexes. It was developed by Morgan Stanley Capital International (MSCI), a provider of global financial indices, products, and services, and Standard & Poor's (S&P), a provider of global equity indices, financial data, and investment services. The GICS structure consists of 10 sectors, 24 industry groups, 67 industries, and 147 sub-industries.

What is SIC? The Standard Industrial Classification (SIC) is a United States government system for classifying industries by a four-digit code. Established in the 1930s, it is being supplanted by the six-digit North American Industry Classification System, which was released in 1997; however, certain government departments and agencies, such as the U.S. Securities and Exchange Commission (SEC), still use SIC codes.

What is Corporate Governance?
Corporate Governance includes the processes, customs, policies, and laws affecting the way a corporation is directed, administered, or controlled. The need for corporate governance stems from:
- Financial market stability
- Investment
- Economic growth
It covers the relationships among the parties involved and the goals for which a corporation is governed. The principal parties are Shareholders, Management, and the Board of Directors; other stakeholders include Employees, Customers & Suppliers, Banks & Other Lenders, and Regulators.

Framework:

- Requirements of regulatory authorities and stock exchanges
- Rights and protection of shareholders and other parties
- Board responsibilities, independence, and accountability
- Committees and meetings
- Transparency in auditing, accounting and public disclosures
- Remuneration of executives and directors
- Code of conduct and ethics
