Server monitoring command for Laravel

Server Monitoring is a package that periodically monitors the health of your server and website. It provides healthy/alarm status notifications for disk usage, an HTTP ping check for monitoring the health of external services, and a validity/expiration monitor for SSL certificates.

This package works by setting up a configuration file and then running a monitor:run artisan command on a schedule. When it runs, it will alert you via email, Pushover, Slack, or a log written to the filesystem.
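For illustration, a scheduled run might be wired up with a cron entry along these lines (the application path and five-minute interval are assumptions for the example, not the package's documented setup):

```
# Hypothetical crontab entry: run the package's monitor:run artisan
# command every five minutes from the application directory.
*/5 * * * * cd /var/www/app && php artisan monitor:run >> /dev/null 2>&1
```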

It currently supports the following monitors:

Disk usage monitors

Disk usage monitors check the percentage of storage space used on the given partition and alert if the percentage exceeds the configurable alarm percentage.
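As an illustration only, the same check can be expressed in a few lines of shell (the partition and 90 percent threshold below are invented example values, not the package's defaults):

```shell
# disk_check PARTITION THRESHOLD
# Prints ALERT if usage of PARTITION exceeds THRESHOLD percent, else OK.
disk_check() {
    partition=$1
    threshold=$2
    # df -P gives stable POSIX output; field 5 is "Use%"; strip the % sign.
    used=$(df -P "$partition" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$used" -gt "$threshold" ]; then
        echo "ALERT: $partition is at ${used}% (threshold ${threshold}%)"
    else
        echo "OK: $partition is at ${used}%"
    fi
}

# Example: warn when the root partition passes 90% usage.
disk_check / 90
```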

HTTP ping monitors

HTTP Ping monitors perform a simple page request and alert if the HTTP status code is not 200. They can optionally verify that a certain phrase is included in the source of the page.
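The alerting decision itself is just two comparisons; here is a minimal shell sketch of that logic (the function name and messages are invented for the example; in practice the status and body would come from a tool such as curl):

```shell
# http_ping_eval STATUS BODY [PHRASE]
# Alert unless the status is 200 and, when a phrase is given, the page
# source contains it.
http_ping_eval() {
    status=$1
    body=$2
    phrase=$3
    if [ "$status" != "200" ]; then
        echo "ALERT: HTTP status $status"
    elif [ -n "$phrase" ] && ! printf '%s' "$body" | grep -q "$phrase"; then
        echo "ALERT: phrase not found"
    else
        echo "OK"
    fi
}

# The inputs could be gathered with, e.g.:
#   status=$(curl -s -o /tmp/body -w '%{http_code}' "$url")
http_ping_eval 200 "<h1>Welcome</h1>" "Welcome"
```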

SSL certificate monitors

SSL certificate monitors retrieve the SSL certificate for the configured URL and ensure that it is valid for that URL. Wildcard and multi-domain certificates are supported.

The monitor will alert if the certificate is invalid or expired, and will also alert when the expiration date is approaching. The number of days before expiration at which to start alerting is also configurable.
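The expiry side of such a check can be sketched with openssl (a sketch under assumptions: it relies on GNU date's -d flag, which BSD date lacks, and the 14-day threshold is an arbitrary example):

```shell
# days_until ENDDATE
# Converts an openssl "notAfter" date string into whole days from now.
days_until() {
    end_s=$(date -d "$1" +%s)
    now_s=$(date +%s)
    echo $(( (end_s - now_s) / 86400 ))
}

# cert_days_left HOST:PORT
# Fetches the live certificate and reports days until it expires
# (requires network access, so it is shown here but not run):
cert_days_left() {
    end=$(echo | openssl s_client -servername "${1%%:*}" -connect "$1" 2>/dev/null \
        | openssl x509 -noout -enddate | cut -d= -f2)
    days_until "$end"
}

# Example alerting rule:
#   [ "$(cert_days_left example.com:443)" -lt 14 ] && echo "ALERT: cert expiring"
```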

You can read more about this package on GitHub.



Server Density, SaaS server monitoring service, raises $1.5 million for further US expansion – TechCrunch

It’s been a while since we’ve heard from Server Density, the UK-based SaaS server monitoring startup, but sometimes that’s the nature of building a business. Today, however, the largely bootstrapped company reveals that it has raised $1.5 million in seed funding led by SP Ventures.

Previously, Server Density had raised €50,000 from Seedcamp, followed by angel funding from Christoph Janz, Dick Williams and Qamar Aziz, all of whom also participated in this round.

The start-up plans to use the new capital to continue its expansion in the United States. Meanwhile, Oren Michaels, previously co-founder and CEO of Mashery, has joined its board of directors.

Founded in 2009 by school friends David Mytton and Harry Wincup, Server Density provides software as a service to help businesses run and monitor their server infrastructure. It syncs with major cloud providers to monitor websites and servers from a single console, API, and mobile app, providing the ability to diagnose issues and maintain server uptime and performance. It works both on-premises and in the cloud.

The company earns money by charging a monthly fee based on the number of systems monitored. In terms of traction, Server Density claims to monitor over 300TB of data per month for its 1,000+ clients. Specifically, it says it will invest in key product areas such as improving its big data analytics feature to “leverage the billions of metrics data it collects daily.”

Co-founder David Mytton tells me the company’s clients include the UK’s National Health Service (NHS), whose 999 emergency response systems for the country’s ambulance service it monitors. Open source CMS company Drupal also uses Server Density to monitor the servers behind its online community, and Algolia is another customer, using the SaaS to monitor its hosted search API.

He cites competitors like New Relic, which focuses on application performance rather than infrastructure, and Datadog. A number of competitors have also been acquired, including Stackdriver (Google), Pingdom and Librato (SolarWinds), and CopperEgg and Boundary (BMC).

In a statement, Mytton said, “I am excited about this new round as it will allow us to invest in key areas while maintaining the efficient and lean model that has brought us to where we are today. Raising money at this point means we can continue our own style of running the business, but with additional resources to create an even better product.”



ScriptRock GuardRail, First Take: Monitoring and Diagnosing Cloud-Based Servers

There is an old DevOps story of a developer who, when a member of the operations team asks why an app isn’t running on the live servers, responds, “It works fine on my laptop.” To which the only answer is: “Okay, hand it over, we’re putting your laptop into production!” It has always been a problem to ensure that development and production systems have the same configuration, especially when deploying applications and services on automated cloud platforms.

ScriptRock’s GuardRail is designed to quickly identify configuration differences on servers and workstations, comparing states between systems and over time. With GuardRail, you can see exactly what’s changed, simplifying diagnostics and reducing downtime. If that developer and admin had used GuardRail, they could have compared the laptop and the failing server to see that, perhaps, a key configuration file had been changed, or the developer had used a different version of an application or service, and then make the necessary changes without a fight over the hardware.

GuardRail’s visual approach to configuration testing makes it easy to see changed and updated items.

Part of a new generation of enterprise startups, ScriptRock focuses on the issues that new development and management methodologies expose. Effective monitoring is the key to a successful DevOps implementation because it gives you a source of truth that can be shared among everyone involved: developers, operations staff, and system administrators. GuardRail’s graphical dashboard can show you what’s changed day by day, across a range of different operating system platforms, and even on desktop devices.

Operating as a cloud service, GuardRail uses local agents or SSH connections to capture configuration information from a server. It tracks the version numbers of running applications, services and processes, and configuration files, with the results displayed as a pie chart. You can compare different systems, as well as the current state against previous scans, with the option to view only the features that have changed.

Drill down for details on installed components and services.

The scanned systems are grouped into environments, so you can separate the systems by role or function. Need to monitor web servers only? You just have to group them together in an environment. You can do the same for any systems that make up your e-commerce process or that host an ERP system. Responsibility for monitoring environments can be assigned to users, with only administrators authorized to add new systems and create environments.

Monitoring for changes is only part of the GuardRail toolset. Policies can be created from scans, so that you can define specific tests for an application. GuardRail provides a basic framework for designing tests, using a selection of predefined test types, for example checking that a service is running or that a specific directory exists. Test scripts can be added to policies and shared with other users, along with descriptions of possible fixes. Some script templates are extensible, allowing you to create your own custom templates.
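As an illustration of what such predefined test types amount to (this is plain shell, not GuardRail's own test format, and the function names are invented):

```shell
# Stand-ins for two of the predefined test types mentioned above.

# Test: a process with the given name is running.
service_running() {
    if pgrep -x "$1" > /dev/null; then
        echo "PASS: $1 running"
    else
        echo "FAIL: $1 not running"
    fi
}

# Test: a specific directory exists.
dir_exists() {
    if [ -d "$1" ]; then
        echo "PASS: $1 exists"
    else
        echo "FAIL: $1 missing"
    fi
}

dir_exists /etc
```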

Choose from a library of templates to add tests to scans.

Tests are run as part of scheduled system scans or can be triggered manually. In practice, most scans should be run daily, although you can choose to run a scan more often when you install a new system. If a test fails, you can email the results to the appropriate people, making sure the issues are resolved as quickly as possible.

A useful feature of GuardRail is the ability to take any saved configuration and export it for use with configuration deployment tools such as Chef, or automation tools such as Microsoft’s Desired State Configuration. Once you save a configuration as a policy, you can export it to the automation package of your choice, ready to build out your systems.

Save configurations for use with configuration management tools, such as Microsoft’s DSC.

The price is competitive. Basic scans for up to five servers are free, with additional servers costing $3 per node. More advanced features are available in ScriptRock’s Plus and Enterprise plans, which let you store scan data for longer and add role-based access control.

Developers working with agile methodologies are used to a test-oriented way of working, with unit testing ensuring that everything works as expected. As operations become more flexible and we move to continuous delivery models, operations and DevOps teams need to take a similar approach. This is where GuardRail comes into its own, giving you the tools to create and run infrastructure-level unit tests – and see the results visually, making it easy to drill down quickly to the real problem and keep your systems and services running.



Choose the right server monitoring tool for your environment

This tip is the second of two parts, originally published as part of “Tuning Performance and Capacity Management,” chapter two of the Choosing Performance Monitoring Tools e-book. The first part reviews performance monitoring tool features.

In modern and complex data centers, performance monitoring is more important than ever, but finding the right server monitoring tool for the job can be a challenge.

Organizations rely on performance monitoring tools to ensure business productivity through application performance and availability. Your server performance monitoring tool provides important metrics on server and application performance levels, service levels, and even network issues that create bottlenecks. It also provides a centralized view of physical servers and virtual machines in the data center, as well as devices and applications, enabling IT professionals to proactively troubleshoot and improve the user experience.

But not all server monitoring tools are created equal. Some may struggle to monitor physical and virtual environments together; some may provide too much information, or information too granular, for data center managers to put to good use. Still others may not be cost effective, or may provide only simplistic functionality. It’s important for IT purchasing managers to explore the range of tools and make informed decisions about which server monitor is right for their environment.

With the wide range of offerings and features, choosing the best server performance monitoring tool can be a challenge. These four steps can pave the way for the right selection:

  1. Analyze your environment. Start by analyzing your applications and the environment in which you run them. Most organizations have a large on-site footprint, so being able to monitor hardware is important. On the other hand, public cloud computing and complex applications will be part of the future (if not present) of every IT organization. Therefore, defining the requirements in light of the extended functionality options is also important. Create a list of current and future performance monitoring requirements and use it to filter potential suppliers.
  2. Define your budget. Your tool options range from free to expensive, with features and ease of use likely to improve as costs increase. Take a look at your budget and estimate how much you can invest in performance monitoring. Consider the cost of any application downtime that might occur without the right server monitoring tool. This calculation can free up additional money.
  3. Choose your preferred deployment option. In the past, Software as a Service (SaaS) performance monitoring technologies offered less functionality than their on-premises counterparts. Today, this is not necessarily true, so the choice of deployment comes down to preference. Choosing one deployment option over another will filter out certain products and simplify the decision-making process.
  4. Create a shortlist and run a pilot. It is almost a given that all products perform well in vendor tests and demo situations, but some will perform differently in a real data center. Test your top three choices. Fully deploying each tool will be difficult; instead, create a scaled-down application and apply each of your shortlisted performance monitoring tools to it. This will help you understand how they work and, critically, compare the features and usability of the monitoring tools.
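The downtime cost estimate mentioned in step 2 is simple arithmetic; here is a sketch with entirely made-up figures:

```shell
# Back-of-the-envelope downtime cost. All numbers are hypothetical:
# hourly revenue at risk, outage hours per year the tool might prevent,
# and the tool's annual cost.
REVENUE_PER_HOUR=5000
OUTAGE_HOURS_AVOIDED=8
TOOL_COST=20000

SAVINGS=$((REVENUE_PER_HOUR * OUTAGE_HOURS_AVOIDED))
echo "Estimated annual downtime cost avoided: \$$SAVINGS"
if [ "$SAVINGS" -gt "$TOOL_COST" ]; then
    echo "The tool pays for itself"
fi
```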

Potential challenges

As you go through the selection process, there are a few “gotchas” you should consider:

  • Future-proof your selection. Most companies will use public cloud computing, so any product you are considering should fully support this transition, even if it is not on your short-term roadmap. Take into account the infrastructure environment of today and of tomorrow.
  • Understand what you are paying for. With so much competition in the performance monitoring space, vendors may offer an introductory feature set at rock-bottom prices while reserving important features for expensive add-ons. If you expect the full product suite, don’t budget for a base product whose extended functionality is available only through an additional purchase. For a SaaS server monitoring tool, understand the factors that affect pricing: the number of applications monitored, the number of registered user accounts, the number of connected components, and so on.
  • Prepare for employee training. Even if a vendor strives to make its product easy to use, the reality is that complex applications require more complex monitoring tools. Your staff will face a learning curve with whichever performance monitoring product you choose. Invest in training and you will reap the benefits of monitoring as quickly as possible.

About the Author:
Bernard Golden is the former CEO of HyperStratus, a cloud computing consulting firm. He is also the author of four books on virtualization and cloud computing. Golden is a highly regarded speaker and lectures around the world.






Zoho Expands Enterprise Cloud Services with New Server Monitoring App for iPhone

Site24x7 server monitoring tool provides easy remote monitoring for IT pros with iPhone

Site24x7 announced this week its new iPhone application. Site24x7 is a robust enterprise server monitoring solution from the Zoho Group, which is best known among iOS users for its Zoho Docs productivity suite.

Site24x7 offers a range of enterprise features for web servers that host critical interactive web applications and cloud services, as well as other mission-critical services such as internal and external DNS services and mail services. In addition to simple server problem reports and alerts, Site24x7 can be used to tune servers for optimal performance and availability.

Site24x7’s key features include the following tools:

  • Monitoring website availability
  • Performance monitoring and statistics
  • Track performance issues associated with web applications that use a range of forms, modules, and services
  • Monitor DNS services (internal and external) for server performance and health
  • Ensure adherence to service level agreements (SLAs) regarding response times to customer issues
  • Monitoring and tuning of messaging service performance (including the ability to calculate message delivery times)
  • Monitoring of additional servers and workstations hosting or processing critical data
  • Global monitoring from over 40 geographic locations around the world
  • Instant notifications and details of issues and potential issues
  • Detailed reports based on tracking and activity

The new iPhone app puts many of these capabilities in the hands of IT specialists and enables in-depth monitoring, detailed server status and history reports, root cause analysis of server issues (using standard troubleshooting tools such as DNS lookup, traceroute, ping, and web page screenshots taken at the time of error), and push notifications for server alerts.

The new Site24x7 app joins several other iOS apps that Zoho offers to integrate with its various cloud services, including the company’s best-known solution, Zoho Docs, a cloud-based office suite that supports local document storage on an iPhone or iPad for offline viewing/editing. The company also offers a range of enterprise cloud services and can provide all the essential features a business needs: document editing, project management, invoicing and accounting, customer relationship management (CRM) and human resource management are just a few examples of the cloud solutions Zoho offers.

Source: Site24x7
Via: Computer briefcase

Image: Site24x7



Prioritize alerts with server monitoring tools

Today’s servers are equipped with a dizzying array of sensors and can produce an incredible variety of alerts. However, an important lesson administrators learn early on is that alerts are not created equal – not all alerts generated by server monitoring tools are actually important. If the servers are configured to notify you every time an alert is triggered, you will receive so many pop-up notifications that really important alerts could go unnoticed. This tip will help administrators determine which alerts are really important and how they want server monitoring tools to notify them of those alerts.

A note on setting up and configuring alerts
Before I begin, I want to point out that there really is no right or wrong way to configure alerts. The recommendations in this tip are based on my two decades of IT experience, but ultimately it comes down to personal preference. While I hope you find my recommendations useful, each administrator should configure server alerts in a way that meets the unique requirements of their own organization.

The other thing to note is that there are many different ways an administrator can generate alerts. Some servers can generate alerts at the hardware level. These capabilities can be useful, but they are far from the only alert mechanism available. Server vendor server monitoring tools can provide a wealth of information, as can operating system level server monitoring tools, such as Microsoft’s System Center Operations Manager. Because there are many different options for server monitoring and alerting, I will take a generalized approach to the topic rather than focusing on specific server monitoring tools.

Prioritize server alerts
The key to effective server monitoring is to prioritize the alerts generated by server monitoring tools. I recommend classifying each type of alert as high, medium, or low priority.

I like to reserve high priority for anything that is absolutely critical. For example, running out of disk space on a server would be a critical event. The failure of a clustered application server would also be a critical event.

Medium priority alerts are a bit more difficult to define. The events that I consider medium priority would probably be defined as high priority by some organizations. I tend to treat an event as medium priority if the condition that caused the alert is not actually causing an outage. For example, if a node in a cluster goes offline for some unknown reason, but the cluster as a whole continues to operate, I would consider this a medium priority. Of course, this has a lot to do with the type of environment I work in. I have worked for large companies that would treat a cluster node failure as a critical event.

If you happen to work for an organization that does not tolerate downtime, it might be a good idea to configure these types of alerts based on whether or not there is a potential single point of failure. For example, suppose you have a RAID array that can handle the failure of two drives without disconnecting. If only one drive in the array fails, you can treat the event as a medium priority alert because the array can still tolerate another drive failure without data loss. However, if two drives failed, you might consider this a high priority, as failure of one additional drive would cause the entire array to fail.
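That RAID rule reduces to comparing the number of failed drives against the number the array tolerates; here is a sketch (the two-failure tolerance comes from the example above, and the function is invented for illustration):

```shell
# raid_alert_priority FAILED TOLERATED
# Medium priority while redundancy remains; high once one more failure
# would take the array down.
raid_alert_priority() {
    failed=$1
    tolerated=$2
    if [ "$failed" -eq 0 ]; then
        echo "none"
    elif [ "$failed" -lt "$tolerated" ]; then
        echo "medium"
    else
        echo "high"
    fi
}

raid_alert_priority 1 2   # one of two tolerated failures used
raid_alert_priority 2 2   # redundancy exhausted
```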

While I tend to think of this as a great way to prioritize alerts, it is much more difficult to configure alerts based on the number of components that have failed than to simply trigger an alert whenever a failure occurs. Depending on the type of monitoring you are performing and the features available in your particular monitoring software, setting up this type of alert may not even be an option.

Configuring the alert mechanism
Once you have determined how the different types of alerts should be classified, you will need to decide how you want to be notified of alerts. My personal preference is for the server monitoring tools to send high priority alerts to my cell phone via text message. I have my cell phone with me most of the time, so sending critical alerts to my phone is the best way to make sure I get the alert as quickly as possible.

Since medium priority alerts are important, but not absolutely critical, I prefer to send these alerts to my email. As you can see in Figure A, Windows Server has native email alerting capabilities, which means you can easily send email alerts based on any event that may occur in the operating system.

Figure A

Windows is able to natively send alerts by e-mail.

I tend to check my email several times a day, which means that an alert sent to my email won’t go unnoticed, but I probably won’t see it as quickly as if the alert had been sent to my cell phone. This is an important distinction, because the last thing I want to be bothered with is a non-critical server alert when I’m out with friends on the weekend. Of course, this is just one example of how alerts can be sent. Many other options exist. For example, a company named Server Density offers an iPhone server monitoring app with full alert support.

Clearly, the subject of what constitutes a priority alert is certainly open to debate. One more thing to consider, however, is that high priority alerts may not always be related to system failures. For example, most servers can trigger an alert whenever the system enclosure is opened. If no one is supposed to open server enclosures other than you, then an enclosure alarm could very well be a high priority alert. Likewise, an over temperature alert can also be considered a high priority because if the server gets too hot it will eventually cause a shutdown.

About the Author: Brien Posey is a seven-time Microsoft MVP with two decades of IT experience. During this time, he published several thousand articles and wrote or contributed to dozens of computer books. Prior to becoming a freelance writer, Posey was CIO for a national chain of hospitals and healthcare facilities. He has also worked as a network administrator for some of the largest insurance companies in the country and for the Department of Defense at Fort Knox.



Installing Nagios on Solaris for Network and Server Monitoring

I have been using Nagios, an enterprise-class server and network monitoring system, for almost 10 years now and have yet to find another free and open source monitoring system that can beat it.

This article will walk you through setting up a basic Nagios installation on a Solaris 10 system. For this example, I’m using Solaris 10 Update 6 (released October 2008) running in 32-bit mode on a VMware virtual machine. The hostname is “sol10vm”, but it will be different in your configuration. Alternate versions of Solaris and the Apache Web Server should work fine; I have run Nagios on everything from Red Hat 7.3 to Mac OS X.

Nagios installation prerequisites
This tutorial assumes that you have installed the GNU Compiler Collection (gcc) and GNU make from the Solaris 10 installation disc, and that the compiler is working correctly. In most cases, it’s just a matter of adding /usr/sfw/bin to your PATH environment variable. If you run “gcc” and “gmake” and get the following output, you are probably good to go.

root@sol10vm:/> gcc
gcc: no input files

root@sol10vm:/> gmake
gmake: *** No targets specified and no makefile found.  Stop.

For this demonstration, I am using the Apache Web Server packages provided by Steve Christensen’s SunFreeware project, specifically Apache 2.0.59 and its dependencies. These packages install under /usr/local, so make sure that /usr/local/bin is on your path and that /usr/local/lib and /usr/local/ssl/lib can be found in your system library search path (use “crle”; see “man crle” for details).

Once you have edited the file /usr/local/apache2/conf/httpd.conf and started the web server with /usr/local/apache2/bin/apachectl start, use your web browser to go to http://yourhostname. It should look something like this:


Downloading, compiling and installing Nagios
The first step in installing Nagios is to create a Nagios user and group. The following commands show how to do this on a freshly installed Solaris system. In your case, the user ID might not be 100, but I recommend that the Nagios group ID be the same as the “nagios” user ID.

root@sol10vm:/> useradd -c "nagios user" -d /usr/local/nagios nagios
root@sol10vm:/> grep nagios /etc/passwd
nagios:x:100:1::/home/nagios:/bin/sh
root@sol10vm:/> groupadd -g 100 nagios
root@sol10vm:/> grep nagios /etc/group
nagios::100:
root@sol10vm:/> usermod -g nagios nagios

As of January 2009, the latest version of Nagios is 3.0.6 and the Nagios plug-ins are at version 1.4.13. You can get both from the Nagios downloads page. Download and extract both archives to a location of your choice. I prefer /usr/local/src.

I prefer to keep my Nagios installation in its own directory, so we’ll pass an argument to the configure script telling it to install everything in /usr/local/nagios.

root@sol10vm:/usr/local/src/nagios-3.0.6> ./configure --prefix=/usr/local/nagios

Once the configure process completes without error, type “gmake all” to compile the core Nagios software and web CGIs. Then type “gmake install” to install everything. Once the installation is complete, run “gmake install-init” to set Nagios to start at system boot, then “gmake install-config” to install sample configuration files.

Once Nagios itself has been compiled and installed, the next step is to repeat the process with the Nagios plugins, which provide enhanced system and service checks. After unzipping the source code archive, the configuration step is the same:

root@sol10vm:/usr/local/src/nagios-plugins-1.4.13> ./configure --prefix=/usr/local/nagios

Once the configure script completes, run “gmake” and “gmake install” to install the plug-ins into the directory tree that was created when you installed the base Nagios package. In addition, you must add /usr/local/nagios/lib to your system library search path using the “crle” command, as you did with /usr/local/lib. If this step is omitted, some of the plug-ins will fail with errors.

Configuring Apache for Nagios
For this example we will not configure Nagios for HTTP user authentication. This makes the tutorial easier, but it should not be used in a production environment. Once you have gone through this tutorial and understood how things are set up, read the official Nagios documentation and change your configuration to implement user authentication.

To configure Apache for use with Nagios, add the following code to your Apache configuration file. In this case, the file is located in /usr/local/apache2/conf/httpd.conf.


ScriptAlias /nagios/cgi-bin /usr/local/nagios/sbin
Alias /nagios /usr/local/nagios/share

<Directory "/usr/local/nagios/sbin">
    Options ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

<Directory "/usr/local/nagios/share">
    Options None
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
Once Apache is configured for Nagios, restart the web server with /usr/local/apache2/bin/apachectl graceful, or by running /usr/local/apache2/bin/apachectl stop followed by /usr/local/apache2/bin/apachectl start.

Even though Nagios is not yet fully configured and started, you should be able to navigate to http://yourhostname/nagios in a web browser and see a screen like this:



Nagios configuration
Nagios has a number of configuration files, located in both /usr/local/nagios/etc and /usr/local/nagios/etc/objects.

The first file we need to edit is /usr/local/nagios/etc/cgi.cfg. In this file, change the value of “use_authentication” to 0. For production use, you will want to re-enable it after reading the documentation on HTTP user authentication.

The second file to edit is /usr/local/nagios/etc/nagios.cfg. In this file, change both “check_external_commands” and “use_syslog” to 0. This prevents someone from running external commands on your Nagios installation while user authentication is not in effect, and keeps Nagios from spamming your syslog.

The default contact group configuration for Nagios is fine for this basic example. Edit /usr/local/nagios/etc/objects/contacts.cfg and change “nagios@localhost” to your e-mail address under the “nagiosadmin” contact definition. For e-mail alerts to work, you need a working mail server or mail relay on your Solaris system (that configuration is beyond the scope of this article).

You will see in contacts.cfg that the contact definition says to use the generic-contact template. This template is defined in /usr/local/nagios/etc/objects/templates.cfg, and it in turn refers to time periods defined in /usr/local/nagios/etc/objects/timeperiods.cfg. In most cases, you’ll want to leave these definitions alone, but they’re highly customizable and allow for notifying multiple contacts across multiple shifts, or for contacting different people depending on what time of day a problem arises.

If you ran the gmake install-config command earlier, after compiling and installing Nagios, there is already a localhost.cfg file in place to check various services on the local machine where Nagios is running. You can safely ignore the “linux-server” references in this file; the author assumes it will run on a Linux system. We want to reduce the checks to network connectivity, the web server, and the SSH daemon. Comment out the entries in this file for the Root Partition, Current Users, Total Processes, Current Load, and Swap Usage services. This will leave only the service definitions for “check_http”, “check_ssh” and “PING” uncommented. The commands used for service checks are defined in the commands.cfg file. You can add your own by editing that file, and then use them in your service definitions.
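For reference, a custom check wired through these files looks roughly like this (a sketch in standard Nagios object syntax; the thresholds and the local-service template name follow the stock sample configs, but verify them against your own files):

```
# In commands.cfg -- a command wrapping the check_disk plug-in:
define command {
    command_name    check_local_disk
    command_line    $USER1$/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$
}

# In localhost.cfg -- a service definition that uses it:
define service {
    use                     local-service
    host_name               localhost
    service_description     Root Partition
    check_command           check_local_disk!20%!10%!/
}
```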

The printer.cfg, switch.cfg, and windows.cfg files contain more examples of how to monitor printers, switches, and Windows systems using some of Nagios’ advanced plug-ins. We won’t be using these files in this tutorial, but they’re worth reading to get a feel for how the different pieces of the Nagios puzzle fit together.

Once the configuration files have been changed to your satisfaction, it’s time to run Nagios to check your configuration files and make sure nothing has been overlooked. To do this, run /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg. If everything checks out, the output will look like this:



If the Nagios configuration check fails, it will tell you what problems it found. Go back and check your config files, then run the check again until it says, “Things look good.”

Getting Nagios running
Once you’ve got your configuration right, it’s time to launch Nagios. If you ran “gmake install-init” earlier, a script has already been created in /etc/init.d that will start everything correctly for you. Run /etc/init.d/nagios start to start the process. Once it’s running, you should be able to navigate to http://yourhostname/nagios with a web browser and click Tactical Overview to see overall status. In this screenshot, you can see that the only monitored host is OK, as are all three services on that host. Notifications for two of these services are disabled.



By clicking on Service Detail, you will get a detailed status report on all the individual services monitored, along with the result of their last check:



Host Detail does exactly what its name suggests – it shows a detailed status display with one line for each monitored host:



The links to Host Group Overview and Host Group Summary will show similar status views for each host group (as defined in the configuration files). Since we only have one host (and one host group) in this quick tutorial, there is no need to view screenshots.

By default, Nagios will check each host and service every five minutes. If something goes down, the web view of that host or service will change from green to red and an email notification will be sent to the contact groups (and by extension, contacts) defined in the host template via templates.cfg. Once the host or service resumes normal operation, a recovery notification will be sent to the defined contacts.

Further reading on Nagios
This tutorial barely scratches the surface of the Nagios enterprise monitoring system and shows only the most basic features. The Nagios online documentation goes into more detail, and a number of good books have been published on the subject.

Hopefully, this basic tutorial will get you started using Nagios for all of your network and server monitoring needs.

ABOUT THE AUTHOR: Bill Bradford is the creator and maintainer of SunHELP and lives in Houston, Texas with his wife Amy.

Did you find this useful? Write to Matt Stansberry regarding your concerns about the data center at [email protected].

