November 15, 2016 06:00 ET | Source: The Apache Software Foundation
Forest Hill, MD, Nov. 15, 2016 (GLOBE NEWSWIRE) -- The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of Apache® jclouds™ v2.0, the Java multi-cloud toolkit.
Apache jclouds gives users the freedom to create applications that are portable across clouds while providing full control to use cloud-specific features. As a cloud-agnostic library, jclouds enables developers to access a variety of supported cloud providers using one API.
"Apache jclouds 2.0 represents a significant milestone for the project," said Ignasi Barrera, Vice President of Apache jclouds. "We are proud to support all major cloud providers in the marketplace with a mature, stable codebase that is ready for production."
jclouds entered the Apache Incubator in April 2013 and graduated as a Top-Level Project (TLP) in October of the same year. The Apache jclouds 2.0 release is the 11th release as a TLP and the consolidation of the project, with more than 13K commits (1K in the last release) made by more than 250 contributors (35 new last year).
Under The Hood
Apache jclouds 2.0 features include:
Wider compatibility with the Guava and Guice libraries.
Configuration of arbitrary hardware values in the compute abstraction.
Support for new cloud providers such as Microsoft Azure Resource Manager, ProfitBricks v3, OneAndOne and Backblaze B2.
Better integration with OSGi and Apache Karaf.
Numerous bug fixes and performance improvements.
Apache jclouds is used by Abiquo, Adobe, CloudBees, Cloudify, Cloudsoft, Mesosphere, and RedHat, among many others. In addition, jclouds is supported by cloud companies and communities such as Amazon Web Services, Backblaze B2, Apache CloudStack, Docker, Google Cloud Platform, Microsoft Azure, OpenStack, Rackspace, and many more.
"Abiquo, as the creator of the leading commercial cloud management platform, relies on jclouds to give our customers the agility they need," said Ian Finlay, CEO of Abiquo. "Using jclouds and our plugin architecture we have been able to deliver support for new cloud providers and features demanded by our customers in days or weeks rather than months. This approach ensures that our service provider and enterprise customers can bring together the cloud providers they need to deliver a great hybrid cloud solution to their customers."
"Apache jclouds provides Cloudsoft AMP with a layer of abstraction across clouds allowing Cloudsoft AMP to model, deploy and manage applications in line with our customers’ requirements," said Duncan Johnston-Watt, Founder and CEO of Cloudsoft. "Many of our customers have multi-cloud application strategies and need rapid support for target locations as these emerge. I’m pleased to see support for Microsoft Azure Resource Manager and improvements for OpenStack (Mitaka) and IBM cloud targets."
"We are very proud of our achievement and welcome contributions that help grow the Apache jclouds community, including joining our mailing lists and submitting feedback, use cases, bug reports, patches, and documentation," added Barrera.
Availability and Oversight
Apache jclouds software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, release notes, documentation, and more information on Apache jclouds, visit http://jclouds.apache.org/ and https://twitter.com/jclouds
Get Involved!
Apache jclouds welcomes contribution and community participation through mailing lists, an IRC channel, as well as attending face-to-face MeetUps, developer trainings, and user events. Those wishing to get involved in the project can find out more at http://jclouds.apache.org/community/
About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees more than 350 leading Open Source projects, including Apache HTTP Server --the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 550 individual Members and 5,300 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) charitable organization, funded by individual donations and corporate sponsors including Alibaba Cloud Computing, ARM, Bloomberg, Budget Direct, Capital One, Cerner, Cloudera, Comcast, Confluent, Facebook, Google, Hortonworks, HP, Huawei, IBM, InMotion Hosting, iSigma, LeaseWeb, Microsoft, OPDi, PhoenixNAP, Pivotal, Private Internet Access, Produban, Red Hat, Serenata Flowers, WANdisco, and Yahoo. For more information, visit http://www.apache.org/ and https://twitter.com/TheASF
© The Apache Software Foundation. "Apache", "jclouds", "Apache jclouds", and "ApacheCon" are registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. All other brands and trademarks are the property of their respective owners.
# # #
The Apache Software Foundation
+1 617 921 8656
All this means that with PCF, your build artifact is your native deployment artifact, while in Kubernetes your build artifact is a docker image. With Kubernetes, you need to define the template for this docker image yourself in a Dockerfile, while in PCF you get this template automatically from a buildpack.
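For comparison, a minimal Dockerfile for a Spring Boot jar might look like the following. This is only a sketch: the base image and jar name are illustrative assumptions, not from the article.

```dockerfile
# Assumed base image; use whatever JRE image your team standardizes on
FROM openjdk:8-jre-alpine
# Copy the build artifact into the image
COPY myapp.jar /app/myapp.jar
# Run the jar on container start
ENTRYPOINT ["java", "-jar", "/app/myapp.jar"]
```

With a buildpack, PCF derives an equivalent recipe for you at push time, so this file never has to exist in your repository.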
PCF splits the web dashboard into two, one for each target audience.
Ops Manager is targeted at the IT professional that is responsible for setting up the virtual machines or hardware that will be used to create the PCF cluster.
Apps Manager is targeted at the developer that is responsible for pushing application code to testing or production environments. The developer is completely unaware of the underlying infrastructure that runs the PCF cluster. All they can really see are the quotas assigned to their organization, such as memory limits.
Kubernetes takes a different approach. You get one dashboard to manage everything. Here’s a typical Kubernetes dashboard:
As you can see from the left-hand side, there is a lot of data to process here. You have access to persistent volumes, daemons, role definitions, replication controllers etc. It’s hard to separate the developer’s needs from the IT needs. Some might tell you this is the same person in a DevOps culture, and that’s a fair point. Still, in reality it is a more confusing paradigm compared to a simple application manager.
Command Line Interface
Cloud Foundry uses a command line interface called cf. It is a CLI that lets you control all aspects of the developer interaction. Following in the footsteps of simplicity that you might have already noticed, the idea is to take an opinionated view of practically everything.
For example, if you are in a folder that contains a spring boot jar file called myapp.jar, you can deploy this application to PCF with the following command:
cf push myapp -p myapp.jar
That’s it! That’s all you need. PCF will look in the current working directory and find the jar executable. It will then upload the bits to the platform, where the java buildpack creates a container, calculates the required memory settings, deploys it to the currently logged-in org and space in PCF, and sets a route based on the application name:
wabelhlp0655019:test odedia$ cf push myapp -p myapp.jar
Updating app myapp in org OdedShopen / space production as user…
Uploading app files from: /var/folders/_9/wrmt9t3915lczl7rf5spppl597l2l9/T/unzipped-app271943002
Uploading 977.4K, 148 files
Starting app myapp in org OdedShopen / space production as user…
Downloading pcc_php_buildpack…
Downloading binary_buildpack…
Downloading python_buildpack…
Downloading staticfile_buildpack…
Downloading java_buildpack…
Downloaded binary_buildpack (61.6K)
Downloading ruby_buildpack…
Downloaded ruby_buildpack
Downloading nodejs_buildpack…
Downloaded pcc_php_buildpack (951.7K)
Downloading go_buildpack…
Downloaded staticfile_buildpack (7.7M)
Downloading ibm-websphere-liberty-buildpack…
Downloaded nodejs_buildpack (111.6M)
Downloaded ibm-websphere-liberty-buildpack (178.4M)
Downloaded java_buildpack (224.8M)
Downloading php_buildpack…
Downloading dotnet_core_buildpack…
Downloaded python_buildpack (341.6M)
Downloaded go_buildpack (415.1M)
Downloaded php_buildpack (341.7M)
Downloaded dotnet_core_buildpack (919.8M)
Creating container
Successfully created container
Downloading app package…
Downloaded app package (40.7M)
Staging…
-----> Java Buildpack Version: v3.18 | https://github.com/cloudfoundry/java-buildpack.git#841ecb2
-----> Downloading Open Jdk JRE 1.8.0_131 from https://java-buildpack.cloudfoundry.org/openjdk/trusty/x86_64/openjdk-1.8.0_131.tar.gz (found in cache)
Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.1s)
-----> Downloading Open JDK Like Memory Calculator 2.0.2_RELEASE from https://java-buildpack.cloudfoundry.org/memory-calculator/trusty/x86_64/memory-calculator-2.0.2_RELEASE.tar.gz (found in cache)
Memory Settings: -Xmx681574K -XX:MaxMetaspaceSize=104857K -Xss349K -Xms681574K -XX:MetaspaceSize=104857K
-----> Downloading Container Security Provider 1.5.0_RELEASE from https://java-buildpack.cloudfoundry.org/container-security-provider/container-security-provider-1.5.0_RELEASE.jar (found in cache)
-----> Downloading Spring Auto Reconfiguration 1.11.0_RELEASE from https://java-buildpack.cloudfoundry.org/auto-reconfiguration/auto-reconfiguration-1.11.0_RELEASE.jar (found in cache)
Exit status 0
Uploading droplet, build artifacts cache…
Uploading build artifacts cache…
Uploading droplet…
Staging complete
Uploaded build artifacts cache (109B)
Uploaded droplet (86.2M)
Uploading complete
Destroying container
Successfully destroyed container
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
0 of 1 instances running, 1 starting
1 of 1 instances running
Although you can start with barely any intervention, this doesn’t mean you give up any control. You have a lot of customizations available in PCF. You can define your own routes, set the number of instances, max memory and disk space, environment variables etc. All of this can be done in the cf cli or by having a manifest.yml file available as a parameter to the cf push command. A typical manifest.yml file can be as simple as the following:
applications:
- name: my-app
  memory: 512M
  instances: 2
  env:
    PARAM1: PARAM1VALUE
    PARAM2: PARAM2VALUE
The main takeaway is this: with PCF, provide the information you know, and the platform will imply the rest. Cloud Foundry’s haiku is:
Here’s my code
Run it on the cloud for me.
I don’t care how.
In Kubernetes, you interact with the kubectl CLI. The commands are not complicated at all, but there is still a higher learning curve, from what I’ve experienced so far.
For starters, a basic assumption is that you have a private Docker registry available and configured (unless you only plan to deploy images available on public registries such as Docker Hub). Once you have that registry up and running, you will need to push your Docker image to it.
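As a sketch, assuming a private registry at registry.example.com (a hypothetical hostname) and an image called myapp, the push flow looks roughly like this:

```shell
# Build the image locally, tag it for the private registry, and push it
docker build -t myapp:1.0 .
docker tag myapp:1.0 registry.example.com/myteam/myapp:1.0
docker push registry.example.com/myteam/myapp:1.0
```

Only after this push can the cluster pull and run your image.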
Now that the registry contains your image, you can use kubectl to deploy it. The Kubernetes documentation gives the example of starting up an nginx server:
# start the pod running nginx
$ kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"
deployment "nginx-app" created
The above command only spins up a kubernetes pod and runs the container.
A pod is an abstraction that groups one or more containers under the same network IP and storage. It’s actually the smallest deployable unit available in Kubernetes. You can’t access a docker container directly; you only access its pod. Usually, a pod contains a single docker container, but you can run more. For example, an application container might want to have a monitoring daemon container in the same pod.
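A sketch of such a pod, with an application container plus a monitoring sidecar; all names and images here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  # The main application container
  - name: app
    image: registry.example.com/myteam/myapp:1.0
  # A monitoring daemon sharing the pod's network and storage
  - name: monitor
    image: registry.example.com/tools/monitor-daemon:1.0
```

Both containers can reach each other over localhost, since they share the pod’s network namespace.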
In order to make the container accessible to other pods in the Kubernetes cluster, you need to wrap the pod with a service:
# expose a port through a service
$ kubectl expose deployment nginx-app --port=80 --name=nginx-http
service "nginx-http" exposed
Your container is now accessible inside the kubernetes cluster, but it is still not exposed to the outside world. For that, you need to wrap your service with an ingress.
Note: Ingress is still considered a beta feature!
I could not find a simple command to expose an ingress at this point (please correct me if I’m wrong!). It appears that you must create an ingress descriptor file first, for example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
Once that file is available, you can create the ingress by issuing a command:
kubectl create -f my-ingress.yaml
Note that unlike the single manifest.yml in PCF, the deployment yml files in Kubernetes are separated — there is one for pod creation, one for service creation and as you saw above — one for ingress creation. A typical descriptor file is not entirely overwhelming but I wouldn’t call it the most user friendly either. For example, here’s a descriptor file for nginx deployment:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
All this to say — with Kubernetes, you need to be specific. Don’t expect deployments to be implied. If I had to create a haiku for Kubernetes, it would probably be something like this:
Here’s my code
I’ll tell you exactly how you should run it on the cloud for me
And don’t you dare make any assumptions on the deployment without my written consent!
Zero Downtime Deployments
Both platforms support deploying applications with zero downtime; however, this is one area where Kubernetes wins in my opinion, since it provides a built-in mechanism for zero-downtime deployments with rollback.
2019 update: As of Pivotal Cloud Foundry 2.4, native zero-downtime deployments are available out of the box!
With Pivotal Cloud Foundry, t̶h̶e̶r̶e̶’̶s̶ ̶n̶o̶ ̶b̶u̶i̶l̶t̶-̶i̶n̶ ̶m̶e̶c̶h̶a̶n̶i̶s̶m̶ ̶t̶o̶ ̶s̶u̶p̶p̶o̶r̶t̶ ̶a̶ ̶r̶o̶l̶l̶i̶n̶g̶ ̶u̶p̶d̶a̶t̶e̶, you’re basically expected to do some cf cli trickery to perform the update with zero downtime. The concept is called blue-green deployment. If I had to explain it as a step-by-step guide, it would probably be something like this:
Starting point: you have myApp in production, and you want to deploy a new version of this app — v2.
Deploy v2 under a new application name, for example — myApp-v2
The new app will have its own initial route — myApp-v2.mysite.com
Perform testing and verification on the new app.
Map an additional route to the myApp-v2 application, using the same route as the original application. For example:
cf map-route myApp-v2 mysite.com --hostname myApp
Now requests to your application are load balanced between v1 and v2. Based on the number of instances available to each version, you can perform A/B testing. For example — if you have 4 instances of v1 and 1 instance of v2, 20% of your clients will be routed to the new codebase.
If you identify issues at any point — simply remove v2. No harm done.
Once you are satisfied, scale the number of available instances of v2, and reduce or completely delete the instances of v1.
Remove the myApp-v2.mysite.com route from v2 of your application. You have now fully migrated to the new codebase with zero downtime, including sanity testing phase and potentially A/B testing phase.
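Put together as cf commands, the manual blue-green flow above might look roughly like this. The app, jar, and domain names are the illustrative ones from the steps, not a prescription:

```shell
# Deploy v2 alongside v1, on its own temporary route
cf push myApp-v2 -p myapp-v2.jar -n myApp-v2 -d mysite.com

# After verifying myApp-v2.mysite.com, map the production route to v2
cf map-route myApp-v2 mysite.com --hostname myApp

# Traffic now load-balances across v1 and v2; when satisfied, drain v1
cf unmap-route myApp mysite.com --hostname myApp
cf delete myApp -f

# Finally, remove the temporary route from v2
cf unmap-route myApp-v2 mysite.com --hostname myApp-v2
```

In practice you would scale instances up and down between these steps, as described above, rather than cutting over in one shot.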
Note: The cf cli supports plugin extensions. Some of them provide automated blue-green deployments, such as blue-green-deploy, autopilot and zdd. I personally found blue-green-deploy to be very easy and intuitive, especially due to its support for automated smoke tests as part of the deployment.
kubectl has a built-in support for rolling updates. You basically pass a new docker image for a given deployment, for example:
kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jon/kubernetes-bootcamp:v2
The command above tells kubernetes to perform a rolling update between all pods of the kubernetes-bootcamp deployment from its current image to the new v2 image. During this rollout, your application remains available.
Even more impressive — you can always revert to the previous version by issuing the undo command:
kubectl rollout undo deployments/kubernetes-bootcamp
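You can also watch a rollout and inspect the revisions that undo can roll back to:

```shell
# Watch the rolling update until it completes
kubectl rollout status deployments/kubernetes-bootcamp

# List previous revisions of the deployment
kubectl rollout history deployments/kubernetes-bootcamp
```

Together with set image and undo, these subcommands cover the whole rolling-update lifecycle.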
External Load Balancing
As we saw previously, both PCF and Kubernetes provide load balancing for your application instances/pods. Once a route or an ingress is added, your application is exposed to the outside world.
If we take an external view of the layers of abstraction needed to reach your application, we can describe them as follows:

Kubernetes: ingress → service → pod → container

PCF: route → container
Internal Load Balancing (Service Discovery)
PCF supports two methods of load balancing inside the cluster:
Route-based load balancing, in the traditional server-side configuration. This is similar to external load balancing mentioned above, however you can specify certain domains to only be accessible from within PCF, thereby making them internal.
Client side load balancing by using Spring Cloud Services. This set of services offers features from the Spring Cloud frameworks that are based on Netflix OSS. For service discovery, Spring Cloud Services uses Netflix Eureka.
Eureka runs as its own service in the PCF environment. Other applications register themselves with Eureka, thereby publishing themselves to the rest of the cluster. Eureka server maintains a heartbeat health check of all registered clients to keep an up-to-date list of healthy instances.
Registered clients can connect to Eureka and ask for available endpoints based on a service id (the value of spring.application.name in case of Spring Boot applications). Eureka would return a list of all available endpoints, and that’s it. It is up to the client to actually access one of these endpoints. That is usually done using frameworks like Ribbon or Feign for client-side load balancing, but that is an implementation detail of the application, and not related to PCF itself.
Client-side load balancing can theoretically scale better since each client keeps a cache of all available endpoints, and can continue working even if Eureka is temporarily down.
If your application already uses Spring Cloud and the Netflix OSS stack, PCF fits your needs like a glove.
Kubernetes uses DNS resolution to identify other services within the cluster. Inside the same namespace, you can lookup another service by its name. In another namespace, you can lookup the service’s name followed by a dot and then the other namespace.
The major benefit that Kubernetes’ load balancing offers is not requiring any special client libraries. Eureka is mostly targeted at java-based applications (although solutions exist for other languages such as Steeltoe for .NET). With Kubernetes, you can make load-balanced http calls to any Kubernetes service that exposes pods, regardless of the implementation of the client or the server. The load-balancing domain name is simply the name of the service that exposes the pods. For example:
You have an application called my-app in namespace zone1
It exposes a GET /myApi REST endpoint
There are 10 pods of this container in the cluster
You created a service called my-service that exposes this application to the cluster
From any other pod inside the namespace, you can call: http://my-service/myApi
From any other pod in any other namespace in the cluster, you can call: http://my-service.zone1/myApi
And the API would load-balance over the available instances. It doesn’t matter if your client is written in Java, PHP, Ruby, .NET or any other technology.
2018 update: Pivotal Cloud Foundry now supports polyglot, platform-managed service discovery similar to Kubernetes, using Envoy proxy, apps.internal domain and BOSH DNS.
PCF offers a services marketplace. It provides a ridiculously simple way to bind your application to a service. The term service in PCF is not the same as a service in kubernetes. A PCF service binds your application to things like a database, a monitoring tool, a message broker etc. Some example services are:
Spring Cloud Services, which provides access to Eureka, a Config Server and a Hystrix Dashboard.
RabbitMQ message broker.
Third party vendors can implement their own services as well. Some of the vendor offerings include MongoDB Enterprise and Redislabs for Redis in-memory database. Here’s a screenshot of available services on Pivotal Web Services:
IBM Bluemix is another Cloud Foundry provider that offers its own services such as IBM Watson for AI and machine learning applications.
Every service has different plans available based on your SLA needs, such as a small database for development or a highly-available database for a production environment.
Last but not least, you have the option to define user-provided services. These allow you to bind your application to an existing service that you already have, such as an Oracle database or an Apache Kafka message broker. A user provided service is simply a group of key-value pairs that you can then inject into your application as environment variables. This offloads any specific configuration such as URLs, usernames or passwords to the environment itself — services are bound to a given PCF space.
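As a sketch, binding an existing Oracle database as a user-provided service might look like this; the service name, credentials, and URL are all hypothetical:

```shell
# Create the user-provided service from key-value pairs
cf create-user-provided-service my-oracle-db \
  -p '{"jdbcUrl":"jdbc:oracle:thin:@dbhost:1521/ORCL","username":"app","password":"secret"}'

# Bind it to the application and restage so the credentials appear in its environment
cf bind-service my-app my-oracle-db
cf restage my-app
```

The bound key-value pairs show up under VCAP_SERVICES in the application’s environment, scoped to the space.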
Kubernetes does not offer a marketplace out of the box. There is a service catalog extension that allows for a similar service catalog, however it is still in beta.
Note that since it can run any docker container — Dockerhub can be considered as a kubernetes marketplace in a way. You can basically run anything that can run in a container.
Kubernetes does have a concept similar to user-provided services. Any configuration or environment variables can live in ConfigMaps, which allow you to externalise configuration artifacts away from your container, thus making it more portable.
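For example, the same PARAM1/PARAM2 pairs from the PCF manifest earlier could live in a ConfigMap; the names here are illustrative:

```shell
# Create a ConfigMap from literal key-value pairs
kubectl create configmap my-app-config \
  --from-literal=PARAM1=PARAM1VALUE \
  --from-literal=PARAM2=PARAM2VALUE

# Inspect what was stored
kubectl get configmap my-app-config -o yaml
```

Pods can then consume the ConfigMap as environment variables (via envFrom) or mount it as a volume of files.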
Speaking of configuration — One of the features of the Spring Cloud Services service is Spring Cloud Config. It is another service that is targeted specifically for Spring Boot applications. The config service serves configuration artifacts from a git repository of your choosing, and allows for zero-downtime configuration changes. If your Spring beans are annotated with @RefreshScope, they can be reloaded with updated configuration by issuing a POST /refresh API call to your application. The property files that are available as configuration sources are loaded based on a pre-defined loading order, which provides some sort of an inheritance-based mechanism to how the configuration is loaded. It’s a great solution, but again assumes that your applications are based on the Spring Cloud (or .NET Steeltoe) stack. If you’re already using spring boot with a config server today — PCF fits like a glove.
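The refresh call itself is just an HTTP POST to the running application; the hostname below is assumed for illustration:

```shell
# Ask a @RefreshScope-annotated Spring Boot app to reload its configuration
curl -X POST http://my-app.mysite.com/refresh
```

Beans annotated with @RefreshScope are re-created with the updated properties on the next access, without restarting the app.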
In Kubernetes, you can still run a config server as a container, but that would probably add unneeded operational overhead since the platform has built-in support for ConfigMaps. Use the native solution of whichever platform you go with.
A big differentiator of Kubernetes is the ability to attach a storage volume to your container. Kubernetes manages storage through PersistentVolumes and PersistentVolumeClaims, and you can attach such a volume to any of your containers. This means you get a reliable storage solution, which lets you run stateful workloads like a database or a file server.
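As a sketch (names and sizes are hypothetical), a PersistentVolumeClaim and a database pod that mounts it might look like:

```yaml
# Request 10Gi of storage from whatever storage class the cluster provides
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:10
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data   # database files survive restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```

If the pod is rescheduled, the claim (and the data behind it) is reattached to the new instance.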
In PCF, your application is fully stateless. PCF follows the twelve-factor app model, and one of those factors assumes that your application holds no state. You should theoretically be able to take the same application that runs in your on-prem data center today and move it to AWS; provided there is adequate connectivity, it should just work. Any storage should be offloaded either to a PCF service or to a storage solution outside the PCF cluster itself. This may be regarded as an advantage or a disadvantage depending on your application and architecture. For stateless runtimes such as web servers, it is always a good idea to decouple them from any internal storage facility.
Getting started with Kubernetes was not easy. As mentioned above, you can’t just start with a 5-minute quick-start guide; there are too many things you need to know and too many assumptions about what you already have (a Docker registry and a git repository are often taken for granted).
Just taking a look at the excellent Kubernetes Basics interactive tutorial shows the level of knowledge the platform requires. The basic on-boarding has six steps, each containing quite a few commands and terms you need to understand. Following the tutorial on a local Minikube VM instead of the pre-configured online cluster is quite difficult.
Getting started with PCF is easy. Your developers already know how to develop their Spring Boot / Node.js / PHP / Ruby / .NET applications. They already know what their artifacts are. They probably already have a Jenkins pipeline in place. They just want to run the same thing in a cloud environment.
If we take a look at PCF’s “Getting Started with Pivotal Cloud Foundry”, it’s almost comical how little is required to get something up and running. When you need more complex interaction, it’s all available to you, either in the cf CLI, as part of a manifest.yml, or in the web console, but none of this prevents you from getting started quickly.
If you mainly develop server-based applications in Java or Node.js, PCF gets you to the cloud more simply, quickly and elegantly.
Kubernetes is truly a great open source platform. Kudos to Google for giving up control and letting the community do its thing. That’s probably the number one reason Kubernetes has taken off so quickly while other solutions like Docker Swarm are falling behind. Other vendors also offer a more PaaS-like experience on top of Kubernetes, such as Red Hat OpenShift.
But with such a diverse and thriving ecosystem, the path forward can take many different directions. It really does feel like a Google product in a way: maybe it will remain supported by Google for years, maybe it will change with barely any backwards compatibility, or maybe they’ll kill it and move on to the next big thing (does anyone remember Google Buzz, Google Wave or Google Reader?). Any AngularJS developer who’s tried to move to Angular 5 can tell you that backwards compatibility is not a top priority.
Cloud Foundry is also a thriving open source platform, but it is pretty clear who sets the tone here. It is Pivotal, with additional contributions from IBM. Yes, it’s open source, but the enterprise play here is Pivotal Cloud Foundry, which provides added value like the services marketplace, Ops Manager etc. And on that side, it’s a limited democracy. This is a product that is meant to serve enterprise customers, and the feature set would first and foremost answer those needs.
If Kubernetes is Google, then PCF is Apple.
A little more of a walled garden, more controlled, with a better design and experience layer, and a commitment to delivering a great product. The platform feels more focused, and focus is critical in my line of work.
The real surprise of the recently announced PCF 2.0 was that everything I’ve been talking about throughout this article is now just one part of a larger offering. The application runtime (everything referred to as PCF in this article) is now called Pivotal Application Service (PAS). There is also a new serverless solution called Pivotal Function Service (PFS) and, lastly, a new Kubernetes runtime called Pivotal Container Service (PKS). This means that Pivotal Cloud Foundry now gives you the best of both worlds: a great application runtime for fast onboarding of cloud-native applications, as well as a great container runtime when you need to develop generic low-level containers.
In this article I tried to share my personal experiences of working with both platforms. Although I am a bit biased towards PCF, it is for a good reason: it has served me well. I approached Kubernetes with an open mind and found it to be a very versatile platform, but also one with a steeper learning curve. Maybe I got spoiled by living in the Spring ecosystem for too long :). With the latest announcement of PKS, it appears that Pivotal Cloud Foundry is set to offer the best integrated PaaS: one that lets you run cloud-native applications as quickly and simply as possible, while also exposing the best generic container runtime when that is needed. I can see this becoming very useful in many scenarios. For example, Apache Kafka is one of the best message brokers available today, but it still doesn’t have a PCF service, so it has to run externally on virtual machines. With PCF 2.0, I can run Apache Kafka in Docker containers inside PCF itself.
The main conclusion is that this is definitely not a this or that discussion. Since both the application runtime and the container runtime now live side by side in the same product, the future seems promising for both.
Thank you for reading, and happy coding!
Visit my homepage: http://odedia.org
Follow me on twitter: https://twitter.com/odedia
Follow me on LinkedIn: https://www.linkedin.com/in/odedia/
Follow me on 500px: https://500px.com/odedia
LAS VEGAS -- Don't choose the least expensive of the enterprise cloud storage providers, even if the vendor happens to be a big name.
That was the advice from Raj Bala, a Gartner research director focusing on cloud infrastructure, at the Gartner IT Infrastructure, Operations Management & Data Center Conference. Bala cautioned attendees about "horror stories" he has heard from customers that made the cheapest choice, even from "big brand names."
During a post-presentation interview, Bala declined to name any of the enterprise cloud storage providers responsible for the client anecdotes that would "make the hair on your neck stand up." But he cited an example of a customer that used an inexpensive brand-name vendor only to have data unavailable "for days on end."
"And the pricing changed. It went from some really inexpensive price to a much higher price," Bala said. In contrast, he said there is "no way in the world" Amazon Web Services (AWS) would increase prices for its Simple Storage Service (S3). "There would have to be a catastrophic supply chain issue for disk drives -- not even SSDs -- for AWS to have a price increase."
Bala's public recommendation to conference attendees was the following: "Think about the number of years that a vendor has been in this market and their commitment to the market. The last thing you want to do is go with a vendor who says, 'Well, the v1 version of our service didn't work, and we're going to scrap it, and we're going to restart over again.' There are a lot of customers in that boat."
Although Bala's cautionary advice did not note specific enterprise cloud storage providers that abandoned the original versions of their cloud storage services, he did offer frank assessments of each of the major challengers to dominant player AWS.
IBM, Oracle rebuild public cloud storage
Bala told attendees that Oracle and IBM are busy rebuilding their public cloud storage. He said Oracle scrapped its open source OpenStack-based platform, because "that just did not work" and "did not get much traction," so the company decided to start over again.
"Oracle did something very smart. They opened a large office in Seattle, and they've hired a bunch of AWS engineers. So, they've got several hundred AWS engineers that are building v2 of Oracle service," Bala said. "After having failed the first time, they're doing some really thoughtful things the second time."
Bala said IBM's public cloud storage, also based on OpenStack, "didn't really go anywhere" and "had lots of problems," leaving the company "trying to rebuild it." He said IBM spent lots of money trying to buy companies, as well as trying to rebuild in-house.
"But IBM's service is a bit of a mess," Bala said. "First, they called it SoftLayer. Then, they called it Bluemix. And now, it's called IBM Cloud. And that's all happened in the span of about 2.5 years. I think IBM Cloud is probably what they're going to keep it at. That seems to make the most sense to me. But they could change it next year, as well. Let's see," Bala said.
Top-ranked enterprise cloud storage providers
The primary challenger to AWS, No. 2 Microsoft, dipped as a visionary in Gartner's 2017 Magic Quadrant for public cloud storage, because "we're seeing very little in terms of new innovation," Bala said. According to Bala, Microsoft has essentially been "following the AWS roadmap" during the past year.
Bala said Google is putting money into its public cloud storage service, but it's not having success with mainstream enterprise because it lacks a sales force "big enough to go knock on doors." He said Gartner clients ask about Google only 3% of the time, based on his review of the entire set of inquiries on infrastructure as a service. Google's cloud storage customer base tends to include sophisticated enterprises, such as hedge funds, Wall Street banks, and biotechnology and pharmaceutical companies, according to Bala.
"Google is an interesting company. Google can put satellites into space. They can build self-driving cars, but building an enterprise strategy is apparently too much for Google," Bala said.
Alibaba dominates the cloud business in China, but has achieved little success with the data centers it has opened in other locations, according to Bala. He said Alibaba essentially replicates AWS offerings, even to the point of identical names for products.
Bala said Dell EMC's Virtustream cloud storage service, based on EMC hardware, lags in Gartner's innovation ratings, because it relies on software that "hasn't been touched in a good three or four years." Rackspace, once the No. 2 enterprise cloud storage provider, faded due to its inability to keep pace with AWS. Rackspace has pivoted to start focusing on services for AWS, Bala said.
Spotty S3 API support
Bala warned conference attendees that compatibility varies greatly among vendors that implement Amazon's S3 API. He said his personal testing showed that some implement the S3 API "dutifully," while other vendors implement only about 40% of the API.
Another area of skepticism for Bala is the increasing number of vendor pitches on multi-cloud strategies. He said the vendors promoting multi-cloud are not the leaders.
"The notion of using multiple clouds is not new, and a multi-cloud architecture is usually problematic. If a customer told me they were doing that, I would throw up big red-flag warnings and say, 'Do not do this unless you absolutely know what you're getting into,'" Bala said.
He said the complexity of an application increases tenfold by spreading it across multiple cloud providers with only marginal benefit. "It doesn't make sense," he said.
Cloud storage price negotiating is another topic on which Bala cautioned clients. He said, because the price of cloud storage falls consistently, it's a mistake to negotiate anything other than a percentage discount. He said some large customers of big providers negotiated a per-gigabyte price only to later find that the market price was less than what they had negotiated.
"If you use a percentage discount, then you're going to get the best price," Bala said. "And most of the vendors -- AWS, for instance -- will start negotiating with you on price if you've got 2 PB of data or more. Microsoft will go a whole lot less. Google will go even further than that."
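Bala's point about percentage discounts can be sketched with some made-up numbers: a flat per-GB price negotiated at signing stops being a good deal once the falling list price, minus the percentage discount, drops below it.

```python
# Illustrative comparison (all prices are hypothetical): a flat negotiated
# per-GB price versus a percentage discount off a falling list price.

def effective_price(list_price, pct_discount):
    """Price per GB-month after applying a percentage discount off list."""
    return list_price * (1 - pct_discount / 100)

fixed_price = 0.018                  # flat $/GB-month negotiated at signing
list_prices = [0.023, 0.021, 0.019]  # hypothetical list price, falling yearly
discount = 20                        # negotiated 20% off list

for year, lp in enumerate(list_prices, start=1):
    discounted = effective_price(lp, discount)
    winner = "percentage discount" if discounted < fixed_price else "fixed price"
    print(f"Year {year}: 20% off list = ${discounted:.4f}, "
          f"fixed = ${fixed_price:.3f} -> {winner} wins")
```

In this sketch the fixed price looks attractive in year one, but the percentage discount wins as soon as the market price falls, which is exactly the trap Bala described.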