Recently, I was working on an AJAX UpdatePanel with multiple triggers. I received the following JavaScript error in IE7, but not Firefox:
'Sys.InvalidOperationException: Could not find UpdatePanel with ID 'xxx'. If it is being updated dynamically then it must be inside another UpdatePanel.'
My page was written as follows, with each trigger on its own line to keep it readable:
<asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional">
<ContentTemplate></ContentTemplate>
<Triggers>
<asp:AsyncPostBackTrigger ControlID="btnImg1" EventName="Click" />
<asp:AsyncPostBackTrigger ControlID="btnImg2" EventName="Click" />
<asp:AsyncPostBackTrigger ControlID="btnImg3" EventName="Click" />
<asp:AsyncPostBackTrigger ControlID="btnImg4" EventName="Click" />
</Triggers>
</asp:UpdatePanel>
I had read that adding UpdateMode="Conditional" and a specific EventName to each trigger might resolve the problem. It did not, but I left both in just in case.
I only noticed this problem in IE after I added the fourth trigger; if I removed that trigger, the JavaScript error went away. So I then changed my page to the following, putting all the triggers on a single line:
<asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional">
<ContentTemplate></ContentTemplate>
<Triggers>
<asp:AsyncPostBackTrigger ControlID="btnImg1" EventName="Click" /><asp:AsyncPostBackTrigger ControlID="btnImg2" EventName="Click" /> <asp:AsyncPostBackTrigger ControlID="btnImg3" EventName="Click" /><asp:AsyncPostBackTrigger ControlID="btnImg4" EventName="Click" />
</Triggers>
</asp:UpdatePanel>
This fixed the problem; even splitting the triggers across two lines worked. So I guess it was a whitespace issue? Very strange...only in IE!
*Note: my ContentTemplate is blank because I'm actually working with a SWFObject to play a movie, so there is a lot more code here; I'm only showing the specific area that was broken.
Tuesday, December 14, 2010
Monday, November 22, 2010
Deleted SSL Certificate Request - Recreate Request
After importing a new certificate, I noticed its friendly name was blank (little did I know that was a known issue with a simple fix). Figuring I had done something wrong during the install, I deleted the certificate, which left me with no pending certificate request to complete the install again.
So now I had to either contact the CA and re-process the request, or somehow get my pending request back. I went for 'get my pending request back'.
After some googling I found the following steps, which worked for me on Windows Server 2008 / IIS 7:
1. Click Start, point to Run, type cmd, and then click OK.
2. Navigate to the directory where Certutil.exe is stored; by default, this is %windir%\system32.
3. Type the following command at the command prompt: certutil -addstore my "C:\junk\www_sitename_com.cer" (a little note: "my" is part of the command; at first I thought it was part of an example name).
You should see the following in the output: CertUtil: -addstore command completed successfully.
4. Get the Thumbprint of the certificate.
5. Click Start, type 'mmc', then go to File - Add/Remove Snap-in, add Certificates, and select Local Computer.
6. Navigate to Certificates - Certificate Enrollment Requests - Certificates, then double-click the certificate.
7. Go to the Details tab, scroll down to the Thumbprint field, copy its value, and paste it into Notepad for reference.
8. Return to the Command prompt, type the following command: certutil -repairstore my "xx dd xx xx oo" (include your own thumbprint value in double quotes.)
Then go back to IIS and follow the steps to Complete Certificate Request.
And, by the way, don't forget to return to IIS Manager and select the new certificate for the site's SSL/HTTPS binding.
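Putting the command-line half together, the recovery is really just two certutil calls (the path and the thumbprint placeholder below are the example values from the steps above; these are Windows-only certificate-store commands):

```shell
cd /d %windir%\system32

rem Put the issued certificate back into the Personal ("my") store
certutil -addstore my "C:\junk\www_sitename_com.cer"

rem Re-attach the certificate to the private key left over from the
rem deleted request, using the Thumbprint copied from the MMC Details tab
certutil -repairstore my "xx dd xx xx oo"
```

After the repairstore succeeds, the certificate shows up in IIS again and the Complete Certificate Request step works as usual.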
Monday, November 15, 2010
IIS 7 Secure File Types like .txt or .html
I've had a .NET 2.0/IIS 4 site running for a long time that provides access to documents, but the documents had to be categorized and secured. I did my best to obscure the file structure and document names and to keep bots out. But I was never quite satisfied with this, because you could not fully secure file types such as .doc or .txt; there was always the potential to serve up those files directly. Very annoying.
My understanding of the IIS 7.0 / .NET 4.0 Integrated Pipeline is that you can secure files other than .aspx. I started using IIS 7 months ago, but never really understood what this new integrated pipeline could do for me until recently. I had to dig to even figure this out on my own. Sigh...
Anyhow, by putting your site in 4.0 Integrated Pipeline mode and adding a few configuration lines to a forms-authenticated site, you can now require security on those "other" file types. Yes, just what I needed!
It was pretty simple, and I am hoping my understanding is right. If I'm wrong, please let me know!
I just modified the Application Pool to use .NET 4.0 in Integrated mode, then added the following lines to the web.config, inside the system.webServer tag:
<modules>
  <remove name="FormsAuthenticationModule" />
  <add name="FormsAuthenticationModule" type="System.Web.Security.FormsAuthenticationModule" />
  <remove name="UrlAuthorization" />
  <add name="UrlAuthorization" type="System.Web.Security.UrlAuthorizationModule" />
  <remove name="DefaultAuthentication" />
  <add name="DefaultAuthentication" type="System.Web.Security.DefaultAuthenticationModule" />
</modules>
Thursday, September 16, 2010
Tips: FedEx WebService for Rate Quotes C#/Asp.Net
After a long run of using UPS quotes for online orders, we moved to FedEx. So that meant I needed to update the ecommerce app to reflect our new shipper.
This task wasn't *too* hard since I had already written the UPS version about 4 years ago, but the FedEx one uses a web service; for the UPS rates, I sent an HttpWebRequest. I don't know why they didn't have a web service; maybe they do now...
Anyhow, hopefully these tips will help get you started.
1. Get the WSDL. I downloaded the WSDL package from FedEx, which includes the RateService_v9.wsdl file, and unzipped it to my local webserver, then to my production webserver.
2. Create a reference to the web service in Visual Studio. In my project, I right-clicked and selected Add Web Reference, then pointed the reference to my localhost (http://localhost/fedex/RateService_v9.wsdl) and gave it a name.
3. Request a Test Key from FedEx; I plugged its values into Account Number, Meter Number, Key, and Password.
4. Download the FedEx example for getting Rate Service quotes. I modified it to my liking. Here's a snippet:
RateRequest request = new RateRequest();
RateService service = new RateService(); // initialize the web service proxy

request.WebAuthenticationDetail = new WebAuthenticationDetail();
request.WebAuthenticationDetail.UserCredential = new WebAuthenticationCredential();
request.WebAuthenticationDetail.UserCredential.Key = this._key;           // developer Key from FedEx
request.WebAuthenticationDetail.UserCredential.Password = this._password; // developer Password from FedEx

request.ClientDetail = new ClientDetail();
request.ClientDetail.AccountNumber = this._accountNumber; // account number
request.ClientDetail.MeterNumber = this._meter;           // meter number

request.TransactionDetail = new TransactionDetail();
request.TransactionDetail.CustomerTransactionId = "MY-RATE-REQUEST";

request.Version = new VersionId(); // version defaults come from the WSDL

if (packages != null && packages.Count > 0)
{
    SetShipmentDetails(request, packages.Count);
    int i = 0;
    decimal totalWeight = 0.0M;
    foreach (Package package in packages)
    {
        RequestedPackageLineItem rpli = new RequestedPackageLineItem();
        Weight packageWeight = new Weight();
        packageWeight.Value = (decimal)package.Weight;
        packageWeight.Units = WeightUnits.LB;
        rpli.Weight = packageWeight;
        totalWeight += (decimal)package.Weight;
        request.RequestedShipment.RequestedPackageLineItems[i] = rpli;
        i++;
    }
    Weight weight = new Weight();
    weight.Value = totalWeight;
    weight.Units = WeightUnits.LB;
    request.RequestedShipment.TotalWeight = weight;
}
.....
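The snippet stops before the actual call, so here is a rough sketch of how the request gets sent, based on the FedEx sample code (getRates is the operation generated from RateService_v9.wsdl; treat the reply handling as my assumption, not gospel):

```csharp
try
{
    // send the populated request through the generated proxy
    RateReply reply = service.getRates(request);
    if (reply.HighestSeverity == NotificationSeverityType.SUCCESS ||
        reply.HighestSeverity == NotificationSeverityType.NOTE ||
        reply.HighestSeverity == NotificationSeverityType.WARNING)
    {
        foreach (RateReplyDetail detail in reply.RateReplyDetails)
        {
            // each detail is one service option (ground, 2-day, etc.)
            Console.WriteLine(detail.ServiceType);
        }
    }
}
catch (System.Web.Services.Protocols.SoapException ex)
{
    // bad credentials or schema problems come back as SOAP faults
    Console.WriteLine(ex.Detail.InnerText);
}
```

Checking HighestSeverity first matters because FedEx can return a structurally valid reply that still carries error notifications.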
5. Once I was satisfied with the results, I requested a Production Key from FedEx and plugged in those values. But I kept receiving "Meter Number is missing or invalid".
This is where I got tripped up. I had to change the URL being called in the app.config (which seems to have been created for me, with properties in it); there is a URL value in there that the web service references.
However, even though I kept changing this value, the app would continue to use the old value, which was the test site, and of course give me the same error, since my meter number was a production one, not a test one.
Finally, I thought to open the Properties - Settings.settings file. Right when I double-clicked the file to open it, the property inside was updated with the new value I had put in app.config.
I assume it was some sort of Visual Studio 2010 thing; it seems like it was supposed to auto-update but didn't. Later, to be sure, I just updated it manually.
After that, everything worked like a charm.
FYI:
(for SOAP Requests)
The test url is: https://wsbeta.fedex.com:443/web-services/rate
The production url is: https://gateway.fedex.com:443/web-services
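For reference, the URL the proxy actually uses lives in the generated applicationSettings section of app.config; mine looked roughly like this (the project and setting names below are made-up examples — yours come from your project and web-reference names):

```xml
<applicationSettings>
  <MyProject.Properties.Settings>
    <setting name="MyProject_FedexRates_RateService" serializeAs="String">
      <!-- swap in https://wsbeta.fedex.com:443/web-services/rate while testing -->
      <value>https://gateway.fedex.com:443/web-services</value>
    </setting>
  </MyProject.Properties.Settings>
</applicationSettings>
```

This is the value that has to agree with Settings.settings, per the gotcha in step 5.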
Wednesday, August 11, 2010
Active Reports Dynamic Header Label and Column Creation
For this task, I needed to modify an existing report to allow for dynamic headers and columns. I was able to accomplish this in report_DataInitialize, although I believe I read that the ReportStart event is the recommended place. When I tried using that, I found I did not have the data I needed to calculate how many labels and columns I would need. So far report_DataInitialize has worked just fine...keeping my fingers crossed.
This is a subreport, by the way. Previously, I had the header labels in the parent report, so I also had to add a group header to make this work right.
So pretty much, I create as many new text boxes as the report needs in a loop, give each a unique ID, and allow some spacing in between. Then I add each text box to the detail section at a 'calculated' position.
For the header, I do the same but create labels, and add them to the group header.
I also populate the columns with data in report_FetchData, using the same looping logic to find each text box by name and give it a value to display.
For the most part this wasn't too bad of a task. It took a little digging, but I think it works.
private void report_DataInitialize(object sender, System.EventArgs e)
{
    if (this.Classroom.Books.Count > 0)
    {
        float i = 0.0F;
        foreach (Book book in Classroom.Books)
        {
            bool isActive = book.Active ?? false;
            if (isActive)
            {
                // unique field name per book, e.g. "bk_42"
                StringBuilder sb = new StringBuilder("bk_");
                sb.Append(book.ID);
                string fieldName = sb.ToString();
                this.Fields.Add(fieldName);

                TextBox tb = new TextBox();
                tb.Name = fieldName;
                tb.DataField = fieldName;
                tb.Height = .2F;
                tb.Width = 1.1F;
                tb.Alignment = TextAlignment.Right;
                // first column starts at 1.6", each one after is 1.1" over
                i = (i == 0.0F) ? 1.6F : i + 1.1F;
                this.Sections["detail"].Controls.Add(tb);
                this.Sections["detail"].Controls[tb.Name].Location = new PointF(i, 0);

                Label lbl = new Label();
                lbl.Name = fieldName + "lbl";
                lbl.Text = book.Description; // the header text for this column
                lbl.Height = .2F;
                lbl.Width = 1.1F;
                lbl.Alignment = TextAlignment.Right;
                this.Sections["groupHeader1"].Controls.Add(lbl);
                this.Sections["groupHeader1"].Controls[lbl.Name].Location = new PointF(i, 0);
            }
        }
    }
}
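For what it's worth, the matching report_FetchData looks something like this; the same "bk_" naming convention finds each dynamic field. (GetDisplayValue is a made-up helper standing in for however you look up the value for the current row — this is a sketch, not my exact code.)

```csharp
private void report_FetchData(object sender, FetchEventArgs eArgs)
{
    // advance your data source for the current row first, then
    // fill each dynamically created field by its generated name
    foreach (Book book in Classroom.Books)
    {
        bool isActive = book.Active ?? false;
        if (isActive)
        {
            string fieldName = "bk_" + book.ID;
            // GetDisplayValue is hypothetical: return whatever this
            // book's column should show for the current row
            this.Fields[fieldName].Value = GetDisplayValue(book);
        }
    }
}
```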
Saturday, June 5, 2010
T-Mobile Hot Spot - Blue Phone Light Indicator
We have T-Mobile's hot spot at home for our phone line. It works fine most of the time; however, when there is a storm and the power goes out, we lose the phone line. And I don't realize it for days, until I try to use my phone and there is no dial tone.
So after about 4 rounds of this (about 10 tries apiece), I think I have the steps down to get the phone light back on most reliably:
1. Turn off the power to all of it.
2. Disconnect the hot spot router from the cable modem.
3. Go ahead and disconnect the phone line (not sure if this matters).
4. Disconnect the coaxial cable from the cable modem.
5. Plug the coaxial cable back in after about 15 seconds.
6. Power on the cable modem and wait until the lights all stabilize; they flash and do something for a bit. Wait until the Cable indicator has been solid for a good few seconds.
7. Power on the hot spot router.
8. Connect the router back up to the cable modem.
9. Connect the phone line and wait; it takes up to 2 minutes for the blue light to come on.
If this does not work, repeat again and again and I promise it will eventually work! Every time I get mad and give up, I come back and the blue light comes on!
Thursday, June 3, 2010
Set up FTP Site on Windows 2008, using IIS 6.0!
It has been quite a while since I've posted. I guess I haven't done anything too interesting in the world of computers and programming until this week.
Okay, so my task this week was to move the current webserver from a Windows 2000 server to a new Windows 2008 server. Whoa! We're kinda behind! And oh my, things have changed! I'll admit, at first I totally complained about this new server layout. Hey, I knew how to navigate that old server and I kind of liked it, BUT it was dying, dying, dying. Soooo bye-bye my little server, HELLO 2008!
Okay, so I didn't do all this in a week; I prepared in the previous weeks and switched the IP address over this week, and thanked my lucky stars it went pretty smoothly, a few glitches but not too bad. EXCEPT the FTP site. I did not originally set up the FTP site, nor have I set one up before, so this was all new to me.
So just in case you ever need to know, here are a few actions that somehow got the FTP site to finally run:
Turn on the FTP service using Turn Windows Features on or off. This is kind of hidden, in my opinion: it is located inside Server Manager, as a role service in the Web Services area.
Create your FTP Site:
Inside IIS Manager, click FTP Sites; it will guide you through a link to open the IIS 6.0 manager.
Under FTP Sites, add a new site. In my case I used the Isolated Users option. At this time we are using a Windows 2000 server to host our Active Directory, and it appeared there was more setup to do on that server if I wanted to use Active Directory isolation; since that server will be upgraded soon, I chose the plain-jane Isolated Users instead.
Now, some tricky parts.
Problem 1: Could not connect to the FTP site from my browser, the command line, or FileZilla...
Resolution: open ports. Using an elevated command prompt, I ran the following command on the server:
netsh advfirewall firewall add rule name="FTP (non-SSL)" action=allow protocol=TCP dir=in localport=21
Also, I ran this one for FTP dynamic ports (umm...whatever that means, I just followed the instructions):
netsh advfirewall set global StatefulFtp enable
Next, I had to open up the passive ports in the firewall: Server Manager - Go To Firewall Properties - Windows Firewall Properties - Inbound Rules.
Add a new rule (located on the right side of the interface).
Under rule type, select the radio button for "Port" and hit Next.
Select the radio button for "Specific local ports" and type out the port range (I think I did 5500-5525). Hit Next.
Then I set up the passive FTP port range (again, whatever, just follow the instructions). Since this is IIS 6.0, in an elevated command prompt go to c:\inetpub\AdminScripts, then run:
adsutil.vbs set /MSFTPSVC/PassivePortRange "5500-5525"
Okay, so port now open.
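For the record, here are the three port commands from above all in one place (run from an elevated prompt; note that adsutil.vbs is a script, so it runs under cscript — these are Windows-only commands):

```shell
rem allow inbound FTP control connections on port 21
netsh advfirewall firewall add rule name="FTP (non-SSL)" action=allow protocol=TCP dir=in localport=21

rem let the firewall track passive-mode FTP data connections
netsh advfirewall set global StatefulFtp enable

rem tell the IIS 6.0 FTP service which passive ports to hand out
cd /d c:\inetpub\AdminScripts
cscript adsutil.vbs set /MSFTPSVC/PassivePortRange "5500-5525"
```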
Problem 2: Setting up the FTP folders. This is where it is so complicated, until you know what the heck is going on.
On this site, I need to allow anonymous access and also allow users to log in to certain folders.
When setting up these folders, be aware that there are virtual directories and physical directories, and they are coupled together, especially in naming convention! It appears Windows FTP does some sort of name matching on the domain name and user name (or just the user name, for local users).
The physical directory:
For anonymous users, you set up the following under your root FTP directory (mine was c:\inetpub\ftproot); maybe this isn't safe, but it works.
In that folder, you must create this structure so that anonymous users are automatically "dropped" into it:
c:\inetpub\ftproot\LocalUser\Public
On the physical folder, the IUSR_XXXX account (the account used to access the site anonymously) needs the appropriate permissions (read, write, list) on the Public folder or folders created under it.
Next, my domain users needed access to their folders. To do this, I had to create a folder under the root named the same as the domain name:
c:\inetpub\ftproot\junk-domain-name
For each user, I created a folder named after their login name:
c:\inetpub\ftproot\junk-domain-name\user-name
**on the physical folder add NTFS permissions for the specific user, with the appropriate permissions - read, write, whatever.**
Next, set up the user's network folder:
Inside the user's folder, I had to create a dummy folder:
c:\inetpub\ftproot\junk-domain-name\user-name\dummy-folder-name
Okay now the Virtual Directory Hook-Up:
(We don't need to do anything else for the anonymous folder; apparently, Windows takes care of that once you create the right physical folders.)
For each domain user that has a folder, you will need to create a virtual directory that points to that folder and is named the SAME as that folder:
So, I created the VD: junk-domain-name, that points to c:\inetpub\ftproot\junk-domain-name
Also, for access to the network folder, I had to create a VD with the same name as the dummy folder:
So, I created the VD: dummy-folder-name, that points to the NETWORK PATH \\network-file-share-name\junk-name
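Laid out as a tree, the physical folders plus the two virtual directories from the steps above end up like this (all names here are the same placeholders used above):

```
c:\inetpub\ftproot
    LocalUser
        Public                   <-- anonymous users land here (IUSR_XXXX perms)
    junk-domain-name             <-- VD "junk-domain-name" points at this folder
        user-name                <-- NTFS perms granted to that domain user
            dummy-folder-name    <-- VD "dummy-folder-name" points at
                                     \\network-file-share-name\junk-name
```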
Ah, so I think those were the steps; this took me roughly 3 days to do. I thought it was going to be way easier than all this, but I did learn a lot!
Hope that helps someone!
Thursday, March 25, 2010
Action Scrip3 Bitmap to ByteArray to .Net C#
Previously, I posted on how I cropped and re-sized an image in Flex. Now, I will take you through some code snippets that I used to send the data over to .NET.
Flex:
I sent data to .NET using a URLRequest and URLLoader, again re-using code from some great person on the 'net.
public function saveFile(imageData:BitmapData):void {
    // method accepts raw bitmap data and encodes it to JPEG before upload
    // create a URLRequest object pointed at the URL of the remote upload script
    var request:URLRequest = new URLRequest(_uploadHandler + "?iKey=" + _userId.toString());
    // the call we will make is a standard HTTP POST call
    request.method = "POST";
    // this enables us to send binary data for the body of the HTTP POST call
    request.contentType = "application/octet-stream";
    var urlLoader:URLLoader = new URLLoader();
    urlLoader.addEventListener(Event.COMPLETE, uploadPhotoHandler);
    // the loader's dataFormat property lets us specify the format for the body, which, in our case, will be BINARY data
    urlLoader.dataFormat = URLLoaderDataFormat.BINARY;
    // the data property of our URLRequest object is the actual data being sent to the server, which in this case is the photo JPEG data
    var myEncoder:JPEGEncoder = new JPEGEncoder(100);
    var byteArray:ByteArray = myEncoder.encode(imageData);
    request.data = byteArray;
    urlLoader.load(request);
}
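If you want to poke at this exchange without Flash in the loop, the raw POST is easy to reproduce. Here's a small Python sketch (the /upload.ashx path and the iKey value are made up for illustration) that stands up a throwaway HTTP server playing the part of the .NET handler, then posts raw bytes to it the same way the URLLoader does:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

received = {}

class UploadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read exactly Content-Length bytes of the raw body,
        # like Request.BinaryRead(Request.TotalBytes) on the .NET side
        length = int(self.headers["Content-Length"])
        received["query"] = self.path
        received["body"] = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), UploadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 32  # stand-in for JPEGEncoder output
req = Request(
    "http://127.0.0.1:%d/upload.ashx?iKey=42" % server.server_port,
    data=fake_jpeg,
    headers={"Content-Type": "application/octet-stream"},
)
with urlopen(req) as resp:
    status = resp.status
server.shutdown()
```

The key detail it demonstrates is that the body is just raw octets with the user id riding along on the query string, which is exactly what the handler below digs back out.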
On the .NET side I had a handler set up to process this call. This part was easy to implement but took some digging to figure out all the right pieces.
It boiled down to getting the data out of the request using Request.BinaryRead (I actually never knew there was a binary reader attached to a Request). Then I used the File class to write out the image data.
.NET:
public void ProcessRequest(HttpContext context)
{
    if (context.Request.TotalBytes > 0)
    {
        int userId = Convert.ToInt32(context.Request["iKey"]);
        byte[] data = context.Request.BinaryRead(context.Request.TotalBytes);
        string basePath = ConfigurationManager.AppSettings["Path"];
        string uploadPath = context.Server.MapPath(basePath);
        if (!System.IO.Directory.Exists(uploadPath))
            System.IO.Directory.CreateDirectory(uploadPath);
        if (data != null)
        {
            string fileName = uploadPath + "xxx.jpg";
            File.WriteAllBytes(fileName, data);
        }
    }
}
ActionScript 3 - Crop and Resize Image
I needed to allow a user to select an image on their computer, crop that image down to whatever area they preferred, then save the image to the server. However, we wanted to limit the size of the image to 300 by 300. So if they uploaded an image of 1000 by 1000, the selected area needed to be reduced to a max of 300 by 300.
Okay, so the front end is Flash/Flex 3, and the server end is C#.
The Flex part:
First to crop an image, I used the free tool called ImageCropper from FlexBlocks. This snazzy little tool made it very easy to crop the image, and then view the cropped image in an image component.
The drawback was that it did not seem to resize the underlying BitmapData like I needed. At least, I could not tell from any of the documentation that it would do this for me.
So I found this code on the 'net that did the most accurate job of resizing the image.
//this is the imagecropper working in this function
private function doCrop():void {
    // Get the cropped BitmapData
    var loadBD:BitmapData = imageCropper.croppedBitmapData;
    var croppedBitmap:Bitmap = ResizeImage.resizeIt(loadBD, 300, 300) as Bitmap;
    //set preview image component's source to cropped bitmap
    croppedImage.source = croppedBitmap;
}
And here is the resize function that someone wrote; I might have changed a few lines. Thanks to whoever wrote and shared this code...
public static function resizeIt(target:BitmapData, width:Number, height:Number):Bitmap {
    var ratio:Number = target.width / target.height;
    //now instead of setting the picture size directly, we calculate
    //what the size should be with two new variables
    var targetHeight:Number = height;
    var targetWidth:Number = targetHeight * ratio;
    if (targetWidth > width) {
        targetWidth = width;
        targetHeight = targetWidth / ratio;
    }
    //create a bitmapData and pass it the size that the picture should be
    var bmp:BitmapData = new BitmapData(targetWidth, targetHeight);
    //create a matrix that we'll use to scale the picture
    var matrix:Matrix = new Matrix();
    //set the scale for the matrix
    matrix.scale(targetWidth / target.width, targetHeight / target.height);
    //draw our bitmapData using our matrix, smoothing the picture by passing true
    bmp.draw(target, matrix, null, null, null, true);
    //create a bitmap and pass it our resized bitmapData
    var bitmap:Bitmap = new Bitmap(bmp);
    //center the bitmap within the target bounds
    bitmap.x = width / 2 - bitmap.width / 2;
    bitmap.y = height / 2 - bitmap.height / 2;
    //return the bitmap
    return bitmap;
}
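To sanity-check the sizing arithmetic in isolation, here's the same fit-to-bounds logic as a small Python sketch (the function name is mine, not from the original code):

```python
def fit_dimensions(src_w, src_h, max_w, max_h):
    """Mirror of resizeIt's sizing math: fit to the max height first,
    then clamp the width and recompute the height if it overflows."""
    ratio = src_w / src_h
    target_h = max_h
    target_w = target_h * ratio
    if target_w > max_w:
        target_w = max_w
        target_h = target_w / ratio
    return target_w, target_h

# a 1000x1000 square fits the 300x300 box exactly
print(fit_dimensions(1000, 1000, 300, 300))
# a 1000x500 image is wide: the width clamps to 300 and the height shrinks to 150
print(fit_dimensions(1000, 500, 300, 300))
```

Either way, the aspect ratio is preserved and neither dimension exceeds the 300-pixel limit, which was the whole requirement.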
When I get some time, I will post how I did the second part of this task which was sending the bitmap data over to .NET (C#), reading into a file and saving it to the server! So in the end, it is possible to do all this, even if I don't understand every last bit of it :)
Hope that helps.
Labels:
actionscript3,
bitmap resize,
bitmapdata resize,
crop,
flexblocks,
imagecropper
Active Reports - Subreport data cut off
I call this experience a lesson learned. Active Reports warns against touching controls in the FetchData event, but I ignored it, thinking a subreport isn't a control! And it caused me nothing but heartache to learn this lesson.
At first the report ran fine; I did not see the problem until I had a lot more data to load, and then I could see the subreport getting cut off. From looking at the report, it was clear the next report was being started even though the first one's subreport data was not complete.
This is kind of how my report looked:
Library Name: My Library
Author Name: Joe Smith
Books
------------------------
My First Book
Rating A: X
Rating B: X
My Second Book
Rating A: X
Rating B: X
My Third Book
Rating A: X
Rating B: X
Total Rating: X
Page 1 of 3
**********************************
Library Name: My Library
Author Name: Pretend Name
Books
------------------------
My First Book
Rating A: X
Rating B: X
My Second Book
Rating A: X
Rating B: X
My Third Book
Rating A: X <----DATA CUT OFF HERE!!!!
Page 2 of 3
**********************************
Library Name: My Library
Author Name: Ellen Jones
Books
------------------------
My First Book
Rating A: X
Rating B: X
My Second Book
Rating A: X
Rating B: X
My Third Book
Rating A: X
Rating B: X
Total Rating: X
Page 3 of 3
**********************************
This is sort of what I had in the code:
void report_FetchData(object sender, DataDynamics.ActiveReports.ActiveReport.FetchEventArgs eArgs)
{
    if (eArgs.EOF == false)
    {
        int id = Convert.ToInt32(Fields["ID"].Value.ToString());
        //get some data
        var list = SomeCallToGetData(id);
        ReportDetail rd = new ReportDetail();
        this.subReport1.Report = rd;
        this.subReport1.Report.DataSource = list;
    }
}
The problem was that FetchData would spin through all the records, and this would not allow the subreport to be properly written out.
So to resolve my problem, I followed the instruction of "don't touch controls in FetchData" and moved this logic to Detail_Format - not sure if this is the right way, but something led me to believe it was....
Now, I use FetchData to gather all the data, then store the subreport data in a private class variable called '_list' for use in Detail_Format, like this:
public partial class SummaryReportAR6 : DataDynamics.ActiveReports.ActiveReport
{
    private List _list;

    public SummaryReportAR6()
    {
        InitializeComponent();
        this.FetchData += new FetchEventHandler(report_FetchData);
        this.detail.Format += new EventHandler(Detail_Format);
    }

    void report_FetchData(object sender, DataDynamics.ActiveReports.ActiveReport.FetchEventArgs eArgs)
    {
        if (eArgs.EOF == false)
        {
            int id = Convert.ToInt32(Fields["ID"].Value.ToString());
            //get some data, store for later use
            _list = SomeCallToGetData(id);
        }
    }

    void Detail_Format(object sender, System.EventArgs e)
    {
        //set subreport datasource
        ReportDetail rd = new ReportDetail();
        this.subReport1.Report = rd;
        this.subReport1.Report.DataSource = _list;
    }
}
Hmmm...I'm not really sure how the timing works out on this, but it seems to be working.
I will consider this one fixed until I see more problems.
Hope that helps someone out there.
Friday, March 5, 2010
Can't Access Localhost using IP Address or Computer Name
Well, I'm not sure when this feature on my pc broke, but it did and it WAS at some point working. I concluded that this can be a tricky little problem as I scoured the internet looking for an answer. Apparently, many people have this problem, and it is never really clear if they resolved it....or how they did.
I spent way too much time on this issue, but darn it I get really annoyed when something used to work, but then does not work anymore.
My problem was getting my locally served website to be accessed either by the IP address of my computer (http://xx.xx.xx.xx) or by the computer name (http://my-computer-name-here), from my own computer or from another computer on the local network. When I would try these methods, I got a page-not-found error. The same error occurred on other PCs too, which makes sense, but I had to hope...
I could access my site on my PC using http://localhost. As a note, I'm on Vista, but I don't think that was my problem. As another note, I changed many things, so I guess it could have been several of these steps combined, but I think it was the final steps that did it...
Okay, so let's look at the things that were right about the localhost setup, in case you need to check them. My hosts file was correct, located at \windows\system32\drivers\etc\hosts.
it contained the necessary line of:
127.0.0.1 localhost
My IIS was set up correctly: I had a Default Website running, IIS was running, and the bindings on the default web site were just HTTP, port 80.
I had my firewall OFF, and I turned OFF my anti-virus. So that took those variables out of the equation.
I checked whether my port 80 was open (netstat -an); well actually, I tried to think I knew what I was looking at. Maybe subconsciously I did :)
I made sure I was not running SQL Server tools that might interfere, like SSIS. (I read somewhere that it may hog port 80 if on.)
I could ping my PC by computer name and IP address. (ping xxx.xxx.xxx)
I checked permissions on the physical folder, and I made sure Anonymous access was on. I did not think it was an IIS problem with all these things in place.
So I finally found some post that sounded feasible, although not really directed at my problem. But, I decided to just blindly follow the instructions as a last straw. Totally risking they could be wrong...and in the end it did resolve my problem. Good thing. I was totally relieved.
In reflection, I faintly recall having to run this command before, but I forgot or someone else was running it for me. Gee thanks for babying me so I could spin my wheels for a whole day!
My steps, just as a reference, I'm not recommending anything to anyone....
Turned off IIS.
Opened a command prompt.
Typed: netsh {enter}
Typed: http {enter}
Typed: add iplisten ipaddress=xx.xx.xx.xx *a "successfully added" message should follow
Typed: exit
Opened a command line.
Typed: ipconfig /flushdns
Turned IIS back on.
Browsed to http://localhost ...check...still working
Browsed to http://xx.xx.xx.xx ...oh, finally working
Browsed to http://my-computer-name ...finally working
Got on another PC, attempted remote access...finally working
Okay, so I'm pretty sure adding the listener is what made this work, but maybe it was the flushing command. I was running XP before, then 'upgraded' to Vista and changed my computer name, so maybe it needed to clear something out. I'm not sure.
And again, I don't know if this is right. All I know is that it made it work. So now I could go back to doing my original work...sending my localhost link to someone so they could look at a page design...
Hope that helps someone!
Wednesday, February 24, 2010
Linq to Entity - Workaround Subquery?
For quite some time, I'd been trying to figure out how to run a subquery in a LINQ to Entities format. I'm kinda old school on querying; I really like to write straight SQL, not LINQ stuff. And to be honest, I'm not really sure what the right terminology is for some of it....so this post may not be the best in the world.
I'm not really even sure if this is a 'LINQ subquery' or not, but it is a way of getting nested data. It works for me, so that is good.
To give some background on the data being retrieved: I'm getting a User that may have many User_Roster rows tied to it; each User_Roster row has a Book row tied to it; each Book row may have a User_Test row tied to it. So in this example, I need to get down to a certain User_Test...oh my, hope that makes sense.
//method that uses linq/entity to get data from the database
public static User Get(int userId)
{
    using (FakeEntities eContext = new FakeEntities())
    {
        User user = eContext.User
            .Include("User_Roster.Book.User_Test")
            .Where(i => i.ID.Equals(userId))
            .FirstOrDefault();
        return user;
    }
}
//sample method that would be a consumer of the previous linq query
public string GetUser(string request)
{
    //setup some hard coded values for example's sake
    //(normally these would come from the request)
    int userId = 42;
    int bookId = 1234;
    //call the method that used linq to query the database
    User item = Get(userId);
    //subquery the list to the exact roster i need
    //in the lambda: m is for User_Roster, any variable name works
    User_Roster ur = item.User_Roster
        .Where(m => m.Book.ID.Equals(bookId))
        .FirstOrDefault();
    //if i found the roster i need, get the data i'm looking for
    if (ur != null)
    {
        User_Test test = ur.Book.User_Test.FirstOrDefault();
        if (test != null)
            ....
    }
}
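The second half of that (the Where/FirstOrDefault chaining over the already-loaded rows) is really just a first-match search over nested collections. Here's a rough Python sketch of the same shape, with made-up stand-in data (the dict keys mirror the entity names from the post):

```python
# toy stand-ins for the entity rows; the data values are invented
user = {
    "User_Roster": [
        {"Book": {"ID": 1233, "User_Test": []}},
        {"Book": {"ID": 1234, "User_Test": [{"Score": 98}]}},
    ]
}

book_id = 1234

# equivalent of .Where(m => m.Book.ID.Equals(bookId)).FirstOrDefault()
ur = next((r for r in user["User_Roster"] if r["Book"]["ID"] == book_id), None)

# equivalent of ur.Book.User_Test.FirstOrDefault()
test = next(iter(ur["Book"]["User_Test"]), None) if ur is not None else None
print(test)
```

The point is that once the Include() has pulled the whole graph into memory, the "subquery" is just filtering objects, not another database round trip.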
I know there is probably a much smarter way of doing this, but I've not come across any good examples on the internet. So if this post helps you, let me know!! I'm curious.
Active Reports - Fetch Data Fires Twice?
The past couple of days have been spent tracking down a problem I was having with a subreport showing blank data for the last record, even though it should have had data. Whatever the last record was, whether I had a single record or three records, it was all whitespace!!
Of course, I tore the report apart and nearly tore my hair out. I took out code that may have been tripping me up, I changed my data, I walked away from the computer. I read and re-read the GrapeCity support forum site. Nothing seemed to be working. But I did notice when I would debug, FetchData would fire twice for the last record.
Nothing ever clicked with me until today. I was reading the support forum again, and they warn about eArgs.EOF all over the place. (Scott, if you are reading this, I believe I saw a post from you on this!) And sure enough, that was my problem. For this report, I needed to wrap the FetchData event code in an eArgs.EOF check. And voilà, I HAVE DATA.
private void report_FetchData(object sender, ActiveReport.FetchEventArgs eArgs)
{
    if (eArgs.EOF == false)
    {
        ...code here!
    }
}
Whether this is right or wrong, it works, so I'm happy!!
Labels:
active reports,
event fires twice,
fetch data,
fetchdata
Friday, February 5, 2010
Active Reports - Single report with multiple subreports
I have created a single Active Report with a single subreport before. Not too difficult. Mine looked like this in the code view:
//create an xml datasource for the subreport
DataDynamics.ActiveReports.DataSources.XMLDataSource xmlDS = new DataDynamics.ActiveReports.DataSources.XMLDataSource();
//load data
XmlDocument xDoc = new XmlDocument();
xDoc.LoadXml(booksXml.ToString());
xmlDS.FileURL = null;
xmlDS.RecordsetPattern = "//book";
xmlDS.LoadXML(xDoc.InnerXml);
//set report datasource
BookDetail ed = new BookDetail();
this.subReport1.Report = ed;
this.subReport1.Report.DataSource = xmlDS;
Okay, so I thought when I added a second subreport with different data, I could just do the same, but I added an if statement that skipped doing anything with the subreport if the object was null.
And, THAT was a problem, it seems. When I would run the report, nothing showed up in the second subreport, even though it should have. Even when I put a breakpoint in the subreport, the Fetch method wasn't being hit - that was a clue.
Well, I guess, even if you have null data you still gotta do something with the subsequent subreport. So, I found an example on the AR support site that explained this to me - setting up a dummy object.
//create an xml datasource for the subreport
DataDynamics.ActiveReports.DataSources.XMLDataSource finalDS = new DataDynamics.ActiveReports.DataSources.XMLDataSource();
XElement booksXml = new XElement("Books");
booksXml.Add(Book.ToXml());
//load data
XmlDocument doc = new XmlDocument();
doc.LoadXml(booksXml.ToString());
finalDS.FileURL = null;
finalDS.RecordsetPattern = "//book";
finalDS.LoadXML(doc.InnerXml);
User_Book book = Book.GetItem(b.ID, m.ID);
if (book != null)
{
    //set subreport datasource
    BookDetail fed = new BookDetail();
    this.subReport2.Report = fed;
    this.subReport2.Report.DataSource = finalDS;
}
else
{
    //no data: still hand the subreport a dummy report object
    BookDetail fed = new BookDetail();
    fed.DetailData = false;
    this.subReport2.Report = fed;
}
Hope this helps someone!
Monday, January 25, 2010
Active Reports - Parameter Passing
Well, it has been a while since I've posted. I've just been cruising along with the coding; it's been a matter of getting it done rather than running into walls. I thought I would write a simple little post on passing parameters or data to a report.
I'm sure there is some sort of an official way of passing data to Active Reports, but I usually take a simple route when it comes to coding. Last week, I needed to pass data to a report, so I googled it. And sure enough, it was easy. I was in the mode of "this is reporting, it is limited!". Of course, Active Reports are also Classes so that was the answer to my problem!
I created private attributes that have properties defined - a getter and setter - although I only needed the setter.
So pretty much it is just like any class. Here's the code snippet:
public partial class SummaryReport : DataDynamics.ActiveReports.ActiveReport
{
    private bool _showPicture = true;

    public bool ShowPicture
    {
        get { return _showPicture; }
        set { _showPicture = value; }
    }
    .....
}
So to pass data in from outside the report, I would call:
SummaryReport rpt = new SummaryReport();
rpt.ShowPicture = false;
That's it. Pretty easy, but possibly overlooked.