Reading and Writing from Windows Registry

The following is an example of reading and writing from the Windows Registry using C++. The code has been created based on examples from MSDN. I have provided the MSDN references in the code in case you want to look up the documentation.


//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This program shows example of reading and writing from registry
#define WIN32_LEAN_AND_MEAN //must be defined before windows.h to take effect
#include <windows.h>
#include <iostream>

using namespace std;

void writeToRegistry(void)
{
    LONG lRv;
    HKEY hKey;

    //Check if the registry key exists
    //http://msdn.microsoft.com/en-us/library/ms724897(VS.85).aspx
    lRv = RegOpenKeyEx(HKEY_CURRENT_USER,
                       L"Software\\Zahid", //the L prefix makes this a wide (Unicode) string literal
                       0,
                       KEY_WRITE,
                       &hKey);

    if (lRv != ERROR_SUCCESS)
    {
        DWORD dwDisposition;

        //Create the key if it did not exist
        //http://msdn.microsoft.com/en-us/library/ms724844(VS.85).aspx
        lRv = RegCreateKeyEx(HKEY_CURRENT_USER,
                             L"Software\\Zahid",
                             0,
                             NULL,
                             REG_OPTION_NON_VOLATILE,
                             KEY_ALL_ACCESS,
                             NULL,
                             &hKey,
                             &dwDisposition);

        DWORD dwValue = 1;

        //http://msdn.microsoft.com/en-us/library/ms724923(VS.85).aspx
        RegSetValueEx(hKey,
                      L"Something",
                      0,
                      REG_DWORD,
                      reinterpret_cast<BYTE *>(&dwValue),
                      sizeof(dwValue));

        //http://msdn.microsoft.com/en-us/library/ms724837(VS.85).aspx
        RegCloseKey(hKey);
    }
    else
    {
        //Key already exists; close the handle opened above
        RegCloseKey(hKey);
    }
}


void readValueFromRegistry(void)
{
    //Example from http://msdn.microsoft.com/en-us/library/ms724911(VS.85).aspx

    HKEY hKey;

    //Check if the registry key exists
    LONG lRv = RegOpenKeyEx(HKEY_CURRENT_USER,
                            L"Software\\Zahid",
                            0,
                            KEY_READ,
                            &hKey);

    if (lRv == ERROR_SUCCESS)
    {
        DWORD dwRet;
        DWORD cbVal = 0;
        DWORD cbData = sizeof(cbVal); //size of the output buffer, in bytes

        dwRet = RegQueryValueEx(hKey,
                                L"Something",
                                NULL,
                                NULL,
                                (LPBYTE)&cbVal,
                                &cbData);

        if (dwRet == ERROR_SUCCESS)
            cout<<"\nValue of Something is " << cbVal << endl;
        else
            cout<<"\nRegQueryValueEx failed " << dwRet << endl;

        RegCloseKey(hKey);
    }
    else
    {
        cout<<"RegOpenKeyEx failed " << lRv << endl;
    }
}


int main()
{
    writeToRegistry();
    readValueFromRegistry();
    return 0;
}


The output is as follows:

Difference between procedures and functions in C++

In very simple terms, in C++ a procedure is a function whose return type is void.

Generally speaking, we use the term procedure to refer to a routine, like the ones above, that simply carries out some task (in C++ its definition begins with void). A function is like a procedure but it returns a value; its definition begins with a type name, e.g. int or double, indicating the type of value it returns. Procedure calls are statements that get executed, whereas function calls are expressions that get evaluated.

A simple program to show the difference is as follows:


//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This program shows difference between functions and procedures
#include<iostream>

using namespace std;

//function
bool checkIfPositive(int x)
{
    if (x >= 0)
        return true;
    return false;
}

//procedure
void printIfPositive(int x)
{
    bool isPositive = checkIfPositive(x);
    if (isPositive)
        cout<<"x is positive and its value is "<<x<<endl;
}

int main()
{
    printIfPositive(3);
    printIfPositive(-54);
    printIfPositive(710);
    return 0;
}


The output is as follows:


Traditional SaaS vs Cloud enabled SaaS

Inspired by Gilad's great summary of the Cloud Programming model, I will try to summarize the differences that I observe between the traditional SaaS model and the "cloud-enabled SaaS model". Although cloud providers advocate that zero effort is needed to migrate existing applications into the cloud, it is my belief that this "strict-port" approach doesn't fully exploit the power of cloud computing. There are a number of characteristics in which the cloud differs from a traditional data center environment, and applications designed along these characteristics will take more advantage of the cloud.

I believe a cloud-enabled application should have the following characteristics in its fundamental design.

Latency Awareness

A traditional SaaS app typically runs within a single data center and assumes low latency among server components. In a cloud environment that spans many distant geographic locations, the assumption of low latency no longer holds. We need to be "smarter" when choosing where to deploy, to avoid placing frequently communicating components in far-distant locations. A "cloud-enabled SaaS app" needs to be aware of latency differences and build in self-configuring and self-tuning mechanisms to cope with them.

Cost Awareness

A traditional SaaS app typically runs on hardware that has already been purchased, where utilization efficiency is not a concern. Now, with the "pay as you go" model, an application needs to pay more attention to its usage pattern and the efficiency of the underlying resources, because they affect the operation cost. A cloud-enabled SaaS application needs to understand the cost model of different kinds of resource utilization (for example, CPU cost may be very different from bandwidth cost) and adjust its usage strategy to minimize the operation cost.

Security Awareness

A traditional SaaS app typically runs in a fully trusted data center protected by perimeter security. In the hybrid cloud model, the perimeter is drawn very differently. An application needs to carefully select where to store its data so that sensitive information does not leak. This involves careful selection of the storage provider, or the use of encryption for protection.

Capitalize on Elasticity

A traditional SaaS app is not used to large-scale growth/shrink of compute resources and typically hasn't been designed to handle how data gets distributed to newly joined machines (in a growth scenario) or redistributed among the remaining machines (in a shrink scenario). This ends up making very inefficient use of network bandwidth and results in high cost and low performance. A more sophisticated data distribution protocol that aligns with the growth and shrink dimensions is needed for a "cloud-enabled SaaS app".

A Macro Pitfall Question

Assuming that two macros are defined in the following way
#define max1(a,b) a < b ? b : a
#define max2(a,b) (a) < (b) ? (b) : (a)

what would be the value of x in the following cases:
x = max1(i += 3, j);
x = max2(i += 3, j);

and why?

Assume that the initial values are i = 5 and j = 7 in both cases. What are the values of i and j after the macro?

Answer:
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
In the case of max1, x = 12, i = 12 and j = 7. The reason is that the substitution happens like this:
x = i += 3 < j ? j : i += 3;
which, using operator precedence and language rules, means:
x = (i += ((3 < j) ? j : (i += 3))). Since 3 < 7 is true, i becomes i + j = 5 + 7 = 12, and x gets the same value.

In the case of max2, x = 11, i = 11 and j = 7. The reason is that the substitution happens like this:
x = (i += 3) < (j) ? (j) : (i += 3); Since 5 + 3 = 8, which is greater than 7, (i += 3) is evaluated a second time, so i becomes 8 + 3 = 11, which is also the value of x.
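
For completeness, here is a small test program (not from the original question, just an illustration using the same variable names) that verifies both expansions:

//Verification sketch - prints the values claimed in the answer above
#include <iostream>

#define max1(a,b) a < b ? b : a
#define max2(a,b) (a) < (b) ? (b) : (a)

int main()
{
    int i = 5, j = 7, x = 0;

    x = max1(i += 3, j);   //expands to: x = i += 3 < j ? j : i += 3;
    std::cout << "max1: x = " << x << ", i = " << i << ", j = " << j << std::endl;   //12, 12, 7

    i = 5; j = 7;
    x = max2(i += 3, j);   //expands to: x = (i += 3) < (j) ? (j) : (i += 3);
    std::cout << "max2: x = " << x << ", i = " << i << ", j = " << j << std::endl;   //11, 11, 7

    return 0;
}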

Measuring elapsed time in C++ using timeGetTime()

Continuing our theme of performance measurement by getting the elapsed time between different instants, today we look at another approach using the timeGetTime() method. The code and approach are the same as in the GetTickCount() case except that the call is replaced.

So what's the difference and which one is better? timeGetTime() has a default resolution of around 5ms, but by using timeBeginPeriod(1) the accuracy can be improved to 1ms. GetTickCount()'s accuracy and jitter cannot be guaranteed. timeGetTime() has more overhead than GetTickCount(), so it should not be used in cases where the calls will be made frequently.

Another thing which may be obvious is that GetTickCount() actually calculates the time based on the number of clock interrupts and multiplies it by the clock frequency, whereas timeGetTime() reads a field called the interrupt time which is updated periodically by the kernel.

Finally, if possible always use QueryPerformanceCounter(), as that is better and recommended.



//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This program shows example of Getting Elapsed Time
#include <iostream>
#include <Windows.h>

using namespace std;

unsigned long startTime_;

void startTime()
{
    startTime_ = timeGetTime();
}

unsigned int calculateElapsedTime()
{
    unsigned int diffInMilliSeconds = timeGetTime() - startTime_;
    return diffInMilliSeconds;
}

int main()
{
    //Increasing the accuracy of Sleep to 1ms using timeBeginPeriod
    timeBeginPeriod(1); //Add Winmm.lib in Project
    unsigned int diffTime = 0, lastTime = 0, newTime = 0;
    startTime();
    cout<<"Start Time = "<<calculateElapsedTime()<<endl;

    Sleep(100);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 100ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(100);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 100ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(5);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 5ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(50);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 50ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;

    timeEndPeriod(1); //Must be called if timeBeginPeriod() was called
    return 0;
}


The output is as follows. Notice the more reliable and jitter-free output:

Measuring elapsed time in C++ using QueryPerformanceCounter()

Yesterday we saw a primitive approach to getting the elapsed time; today we will look at the most popular and standard way of getting the elapsed time, using the QueryPerformanceCounter and QueryPerformanceFrequency approach.

Unlike the GetTickCount approach, which is not very reliable and does not have good resolution, this approach is quite reliable and has very good resolution, often better than a millisecond. The only problem with this approach used to be that on old systems it may not be very reliable. For example, on some old OS versions (before XP SP2) with multiple processors present, if the clocks of the processors are not well synchronised (due to buggy hardware) you can get different results each time the QueryPerformanceCounter call is made. Another problem with some chipsets is that, because of their power saving, the frequency changes while we call QueryPerformanceFrequency only once during the program. This can result in incorrect timings being returned. There were some other problems as well, but they all now seem to have been fixed either in firmware or in the OS. It is recommended that these calls only be used on Windows XP SP2 or later.




//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This program shows example of Getting Elapsed Time
#include <iostream>
#include <Windows.h>

using namespace std;

LARGE_INTEGER timerFreq_;
LARGE_INTEGER counterAtStart_;

void startTime()
{
    QueryPerformanceFrequency(&timerFreq_);
    QueryPerformanceCounter(&counterAtStart_);
    cout<<"timerFreq_ = "<<timerFreq_.QuadPart<<endl;
    cout<<"counterAtStart_ = "<<counterAtStart_.QuadPart<<endl;
    TIMECAPS ptc;
    UINT cbtc = 8;
    MMRESULT result = timeGetDevCaps(&ptc, cbtc);
    if (result == TIMERR_NOERROR)
    {
        cout<<"Minimum resolution = "<<ptc.wPeriodMin<<endl;
        cout<<"Maximum resolution = "<<ptc.wPeriodMax<<endl;
    }
    else
    {
        cout<<"result = TIMER ERROR"<<endl;
    }
}

unsigned int calculateElapsedTime()
{
    if (timerFreq_.QuadPart == 0)
    {
        return -1;
    }
    else
    {
        LARGE_INTEGER c;
        QueryPerformanceCounter(&c);
        return static_cast<unsigned int>( (c.QuadPart - counterAtStart_.QuadPart) * 1000 / timerFreq_.QuadPart );
    }
}

int main()
{
    //Increasing the accuracy of Sleep to 1ms using timeBeginPeriod
    timeBeginPeriod(1); //Add Winmm.lib in Project
    unsigned int diffTime = 0, lastTime = 0, newTime = 0;
    startTime();
    lastTime = calculateElapsedTime();
    cout<<"Start Time = "<<lastTime<<endl;

    Sleep(100);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 100ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(100);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 100ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(5);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 5ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(50);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 50ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;

    timeEndPeriod(1); //Must be called if timeBeginPeriod() was called
    return 0;
}


The output is as follows:

Measuring elapsed time in C++ using GetTickCount()

There are a variety of ways to obtain the elapsed time in a program. We will look at some of them in the next few posts. The first approach uses the GetTickCount() method. It should be mentioned that this method is not very accurate, and some people have gone to the extent of saying that it should be removed altogether. Nevertheless, it is quite widely used for cases where high resolution is not required.




//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This program shows example of Getting Elapsed Time
#include <iostream>
#include <Windows.h>

using namespace std;

unsigned long startTime_;

void startTime()
{
    startTime_ = GetTickCount();
}

unsigned int calculateElapsedTime()
{
    unsigned int diffInMilliSeconds = GetTickCount() - startTime_;
    return diffInMilliSeconds;
}

int main()
{
    //Increasing the accuracy of Sleep to 1ms using timeBeginPeriod
    timeBeginPeriod(1); //Add Winmm.lib in Project
    unsigned int diffTime = 0, lastTime = 0, newTime = 0;
    startTime();
    cout<<"Start Time = "<<calculateElapsedTime()<<endl;

    Sleep(100);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 100ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(100);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 100ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(5);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 5ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;
    lastTime = newTime;

    Sleep(50);
    newTime = calculateElapsedTime();
    diffTime = newTime - lastTime;
    cout<<"Time after 50ms Sleep = "<<newTime<<", Difference = "<<diffTime<<endl;

    timeEndPeriod(1); //Must be called if timeBeginPeriod() was called
    return 0;
}


The output is as follows:

Multi-tenancy in cloud computing

A follow-up on an interesting discussion in the Cloud Computing discussion group. What is a tenant ? Is multi-tenancy an important feature of the cloud ? Who are the participants and what are their roles in the cloud ecosystem ?

Participants in the cloud
In my model, a "SaaS provider" is the organization that provides a domain-specific SaaS App to its users (e.g. SmugMug for photo sharing). In this case, the SaaS consumer is just any individual who has a SmugMug account. The SaaS provider may choose an infrastructure provider (e.g. Amazon) to host its SaaS App. In this example, SmugMug is a SaaS provider and an infrastructure consumer at the same time.


Definition of a Tenant
Now, who is the "tenant" in this picture ? I think Amazon will consider SmugMug a tenant. But I doubt SmugMug will consider its individual users tenants.

But what if SmugMug offers a service to car manufacturers so they can store, organize and image-process their photos, which will show up on the car manufacturer's website ? Will SmugMug consider BMW a tenant ? I think the answer is "yes". So maybe the definition of a tenant is "my user who has her own users".

You can see that a value chain can be built up. So, except for the start and end points of this value chain, everyone is a "tenant" of its service provider.

Multi-tenancy
After we define what a "tenant" is, what does "multi-tenancy" mean ? In my opinion, "multi-tenancy" is for the benefit of the service provider, so they can manage resource utilization more efficiently, but multi-tenancy is not to the tenant's advantage at all. In the hypothetical example I gave above, would BMW prefer a multi-tenancy environment from SmugMug ? My guess is that BMW would in fact worry about its data sitting together with its competitors' data in a shared infrastructure. I bet they would prefer an environment which is as isolated as possible.

While "multi-tenancy" indicates that some infrastructure is shared, at what layers are things being shared can make a big difference. For example, Amazon AWS is multi-tenant at the hardware level in that its users may be sharing a physical machine. On the other hand, Force.com is multi-tenant at the DB level in that its users are sharing data in the same DB tables. And Amazon is relying on the hypervisor to provide the isolation between tenants while Force.com is relying on a query rewriter to do the same.

While "multi-tenancy" at the highest layer basically advocates a shared-DB approach, does it enables better collaboration or sharing between tenants ? I don't think so. I think all we need is to have an authentication model such that spontaneous workgroup can be formed and membership can be identified. Then it is just a matter of a requesting tenant to presents his membership to another tenant when making a SaaS service call. What I mean is they are using an SOA approach to access data, rather than directly access a shared-DB.

Instantiating a Multimap inside a class

The following is a very simple example of Instantiating a Multimap. This example was posted as a result of a comment on the actual Multimap example here.




//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This program shows use of multi-maps in a class
#include<iostream>
#include<map>
#include <string>

using namespace std;

class mapInstantiator
{
public:
    ~mapInstantiator();
    void createMultiMap(void);
    void insertElements(pair<string, int> element);
    void printer(void);
private:
    multimap<string, int> *phoneNums;
};

void mapInstantiator::createMultiMap(void)
{
    //Instantiate
    phoneNums = new multimap<string, int>;
}

void mapInstantiator::insertElements(pair<string, int> element)
{
    phoneNums->insert(element);
}

void mapInstantiator::printer(void)
{
    cout<<"\n\nMultimap printer method"<<endl;
    cout<<"Map size = "<<phoneNums->size()<<endl;
    multimap<string, int>::iterator it = phoneNums->begin();
    while (it != phoneNums->end())
    {
        cout<<"Key = "<<it->first<<" Value = "<<it->second<<endl;
        it++;
    }
}

mapInstantiator::~mapInstantiator()
{
    //Dont forget to delete the pointer
    delete phoneNums;
}

int main()
{
    mapInstantiator aClass;
    aClass.createMultiMap();

    //Insert key, value as pairs
    aClass.insertElements(pair<string, int>("Joe",123));
    aClass.insertElements(pair<string, int>("Will",444));
    aClass.insertElements(pair<string, int>("Joe",369));
    aClass.insertElements(pair<string, int>("Joe",812));
    aClass.insertElements(pair<string, int>("Will",4556));
    aClass.insertElements(pair<string, int>("Smith",71));

    aClass.printer();

    return 0;
}


The output is as follows:


Functions returning void

C++ truths discusses an interesting interview question:

"Can you write a return statement in a function that returns void?" The answer is "Yes! You can return void!"

The following is a simple program, picked up from the same blog and modified, showing a function returning void:



//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This is a simple example of a function returning void
#include<iostream>

using namespace std;

static void foo (void)
{
    cout<<"foo() called"<<endl;
}

static void bar (void)
{
    cout<<"bar() called"<<endl;
    return foo(); // Note this return statement.
}

int main ()
{
    cout<<"main() called"<<endl;
    bar();
    return 0;
}


The output is as follows:

This feature is very useful in the case of templates. Let's write a simple program that uses templates:


//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This is a simple example of a Template returning void
#include<iostream>
//#include <typeinfo> - Some compilers may need this

using namespace std;

template <class T> T FOO (void)
{
    cout<<"T FOO() called with T = "<<typeid(T).name()<<endl;
    return T(); // Default construction
}

template <class T> T BAR (void)
{
    cout<<"T BAR() called with T = "<<typeid(T).name()<<endl;
    return FOO<T>(); // Syntactic consistency. Same for int, void and everything else.
}

int main (void)
{
    cout<<"main() called"<<endl;
    BAR<void>();
    BAR<int>();
    BAR<char>();
}



The output is as follows:

Skinny Straw in the Cloud Shake

There is a recent article by Bernard Golden discussing how network constraints (bandwidth and latency), as well as the associated bandwidth usage cost, continue to be one of the main obstacles in cloud computing.

There are two concerns here. One is about not meeting the application's performance goals (throughput and response time). The other is about the cost of running in the cloud (receiving a large bill from your cloud provider).

The goal is to reduce the total amount of data transfer. A number of cloud app design patterns can be used ...

How do you put the code and data together before the processing can start ?

Try to be as stateless as possible
There is zero data to be transferred if your component is stateless by nature. The following techniques assume that there are some unavoidable stateful components involved.

Move your data creation process into the cloud first
Instead of uploading a huge volume of data from your data center into the cloud so processing can start, can you move the data creation process into the cloud ? Of course, you need to carefully evaluate the security implications here.

Distribute the architecture of your data creation
If the subsequent processing is based on a parallel execution architecture, why not distribute the data creation processing as well ? This will save a data repartitioning step.

Move the code to the data
Code usually has a much smaller footprint than the data it processes. Therefore it is more economical to move processing logic to the data rather than downloading the data to process. Of course, we need to check to make sure the machine hosting the data has enough CPU power to execute the processing logic.

Do as much as possible along current partition
A typical parallel processing architecture partitions data along some dimension, conducts the processing in parallel, then repartitions the data along another dimension, conducts the next stage of processing, and so on ...

See if you can rearrange the order of processing such that you can do as much as possible within the current partition. The goal is to minimize the number of repartitions where a lot of data transfer is needed.

Minimize data redistribution at grow/shrink
How do you redistribute data to a newly joined VM such that the overall data transfer is minimized ? For example, a "consistent hashing" algorithm can be used so that data redistribution only happens in the neighborhood of the newly joined VM rather than across every other existing VM, as the sketch below illustrates.
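
The following is a minimal consistent-hashing sketch (an illustration only, not from the original article); the node and key names are hypothetical, and a simple FNV-1a hash stands in for a production-quality hash function:

//Illustrative consistent-hashing ring: adding a node only moves the keys in its arc
#include <iostream>
#include <map>
#include <string>

//simple FNV-1a string hash, good enough for illustration
static unsigned long hashString(const std::string& s)
{
    unsigned long h = 2166136261UL;
    for (size_t i = 0; i < s.size(); ++i)
    {
        h ^= static_cast<unsigned char>(s[i]);
        h *= 16777619UL;
    }
    return h;
}

class ConsistentHashRing
{
public:
    void addNode(const std::string& node)
    {
        ring[hashString(node)] = node; //only keys between the new node and its predecessor move
    }
    std::string nodeForKey(const std::string& key) const
    {
        //first node clockwise from the key's hash; wrap around to the beginning if needed
        std::map<unsigned long, std::string>::const_iterator it = ring.lower_bound(hashString(key));
        if (it == ring.end())
            it = ring.begin();
        return it->second;
    }
private:
    std::map<unsigned long, std::string> ring; //hash position -> node name
};

int main()
{
    ConsistentHashRing ring;
    ring.addNode("vm-1");
    ring.addNode("vm-2");
    std::cout << "photo-123 is on " << ring.nodeForKey("photo-123") << std::endl;

    ring.addNode("vm-3"); //growth: only keys falling into vm-3's arc are redistributed
    std::cout << "photo-123 is on " << ring.nodeForKey("photo-123") << std::endl;
    return 0;
}

In a real system, each physical node would usually be given many virtual positions on the ring to spread load more evenly; that detail is omitted here for brevity.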

Conduct data redistribution in the background
Data redistribution should have an impact on performance but not on accuracy. In other words, the newly joined VMs should be able to serve requests immediately while doing data redistribution in the background. The data redistribution algorithm (which may take a longer time to finish) also needs to adapt to VMs continuously joining. In other words, data redistribution can just be an ongoing performance improvement process in a highly dynamic workload environment.

Place component with bandwidth cost in mind
Other than the amount of data being transferred (which should be minimized anyway), it is equally important to look into bandwidth cost. Typically the cloud provider will charge a substantial amount for bandwidth usage across the cloud boundary. Therefore, it is important to place the components such that when data transfer does need to occur, it occurs within the cloud rather than across the cloud boundary. This requires a careful analysis of the communication pattern among application components, grouping frequently communicating components so they are deployed within the same cloud.

Migrate data as communication pattern changes
Communication patterns may change after the system is deployed. It is important to continuously monitor the actual communication patterns and determine whether a migration is needed to minimize the bandwidth cost, weighing the gain against the cost of migration. The gain is estimated by multiplying the communication frequency by the time that the new communication pattern is expected to persist. The cost is estimated by the total amount of data redistribution traffic caused by the component migration. The migration should take place only when the migration cost is smaller than the gain, as the sketch below shows.
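
A hypothetical sketch of that decision rule (the function name, units and numbers are made up for illustration):

//Migrate a component only when the one-off redistribution cost is below the estimated gain
#include <iostream>

bool shouldMigrate(double savedBytesPerSecond,            //traffic saved per second by the new placement
                   double expectedPatternDurationSeconds, //how long the new pattern should persist
                   double redistributionBytes)            //one-off traffic caused by moving the component
{
    double gain = savedBytesPerSecond * expectedPatternDurationSeconds;
    return redistributionBytes < gain;
}

int main()
{
    //e.g. saving 2 MB/s for an expected hour versus moving 5 GB of state
    bool migrate = shouldMigrate(2e6, 3600.0, 5e9);
    std::cout << (migrate ? "migrate" : "stay") << std::endl; //prints "migrate" (7.2 GB gain > 5 GB cost)
    return 0;
}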

Exploit Caching
Use a local cache to reduce the need for data access, especially if the data is relatively static; a small sketch follows below.
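
A minimal read-through cache sketch (illustrative only; fetchFromRemoteStore is a hypothetical stand-in for whatever call would otherwise cross the cloud boundary):

//Read-through local cache: only cache misses generate remote data access
#include <iostream>
#include <map>
#include <string>

std::string fetchFromRemoteStore(const std::string& key)
{
    //placeholder for an expensive call across the cloud boundary
    return "value-for-" + key;
}

class LocalCache
{
public:
    std::string get(const std::string& key)
    {
        std::map<std::string, std::string>::iterator it = cache.find(key);
        if (it != cache.end())
            return it->second;                         //served locally, no bandwidth used
        std::string value = fetchFromRemoteStore(key); //only misses go over the network
        cache[key] = value;
        return value;
    }
private:
    std::map<std::string, std::string> cache;
};

int main()
{
    LocalCache cache;
    std::cout << cache.get("config") << std::endl; //miss: remote fetch
    std::cout << cache.get("config") << std::endl; //hit: served from the local cache
    return 0;
}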

Allow direct access to data
This is against the philosophy of SOA, where the internal state should be encapsulated behind an API interface. In this model, when a client wants to extract data, it first has to make a request to the owning application, which then makes a request to the DB, gets the data, encodes it into the web service response, and passes the result back to the client. If network bandwidth is costly, it will be much more efficient if the client can have direct access to the DB.

Expose latency information to the application
Provide a latency map so applications can dynamically adjust which communication partners they want to talk to.

Suppress Compiler Warning using #pragma

Occasionally the compiler can throw out warnings which may be informative to you but which you do not want others to see. You can use a #pragma directive to suppress such warnings. An example is shown in the code below:



//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This example shows how to suppress warnings using #pragma
#include<iostream>

using namespace std;

class error
{
public:
    error(string s)
    {
        info = s;
    }
private:
    error();
    string info;
};

#pragma warning( disable : 4290 )

int someFunc(void) throw (error)
{
    return 1;
}

#pragma warning( default : 4290 )

int someOtherFunc(void) throw (error)
{
    return 1;
}

int main()
{
    return 0;
}




Here, for 'someOtherFunc', the compiler will generate a warning:


warning C4290: C++ exception specification ignored except to indicate a function is not __declspec(nothrow)


but a similar warning for 'someFunc' won't be generated because we have already suppressed it using the #pragma.


Const Pointer and References Table

An interesting table, courtesy of my colleague Simon Locke, about the different possibilities of using const with pointers and references.

Definition            Read-only   Ownership passed   Allows NULL
Type&                 No          No                 No
const Type&           Yes         No                 No
Type*                 No          Yes                Yes
const Type*           Yes         Yes                Yes
Type* const           No          No                 Yes
const Type* const     Yes         No                 Yes

The definition field is the parameter being passed to a function or the value returned from a function. For example, you can see an example here of the first definition, 'Type&'. A small illustration of the table follows below.
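
As a quick illustration of the Read-only and Allows NULL columns (a hedged sketch, not from the original post; the function names are made up):

//Illustrates how the parameter forms from the table above behave
#include <iostream>
using namespace std;

void byRef(int& v)            { v = 1; }                 //Type&: writable, cannot be NULL
void byConstRef(const int& v) { cout<<v<<endl; }         //const Type&: read-only, cannot be NULL
void byPtr(int* v)            { if (v) *v = 2; }         //Type*: writable, may be NULL
void byConstPtr(const int* v) { if (v) cout<<*v<<endl; } //const Type*: read-only, may be NULL

int main()
{
    int x = 0;
    byRef(x);          //x becomes 1
    byConstRef(x);     //prints 1, x unchanged
    byPtr(&x);         //x becomes 2
    byPtr(NULL);       //allowed: the pointer may be NULL
    byConstPtr(&x);    //prints 2
    //int* const p = &x;       //Type* const: p cannot be reseated, but *p may change
    //const int* const q = &x; //const Type* const: neither the pointer nor the pointee may change
    return 0;
}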

Const after a function name

Here is a classic example of putting const after a function name. What this means is that the function will not modify any member variables of its class.




//Program tested on Microsoft Visual Studio 2008 - Zahid Ghadialy
//This example shows what happens if const is put after a function
#include<iostream>

using namespace std;

class ABC
{
public:
    int func1(int a, int b);
    int func2(int a, int b) const;
private:
    int x,y;
};

int ABC::func1(int a, int b)
{
    x = a, y = b;
    cout<<"x = "<<x<<" and y = "<<y<<endl;
    return 0;
}

int ABC::func2(int a, int b) const
{
    //x = a, y = b; - NOT POSSIBLE, Compile Error
    cout<<"Cant change x and y"<<endl;
    return -1;
}

int main()
{
    ABC abc;
    abc.func1(3, 7);
    abc.func2(20, 40);
    return 0;
}


The output is as follows:

