Martineski

joined 1 year ago
[–] Martineski@lemmy.fmhy.net 2 points 1 year ago (1 children)

You can always run your own private instance if that's an option for you.

[–] Martineski@lemmy.fmhy.net 1 points 1 year ago (1 children)

My instance has had federation issues since it went down yesterday, so that may be the cause. If it's not, it may be related to the issue from this post: https://lemm.ee/post/5905754

All images which have federated in from other instances will be deleted from our servers, without any exception. At this point, we have millions of such images, and I am planning to just indiscriminately purge all of them. Posts from other instances will not be broken after the deletion; the deleted images will simply be loaded directly from other instances.

More links related to the topic: https://lemm.ee/post/5839513 & https://lemmy.world/post/3995057 & https://lemmy.world/post/4018526

[–] Martineski@lemmy.fmhy.net 0 points 1 year ago* (last edited 1 year ago) (3 children)

Yeah, most likely to prevent hotlinking. I made the post quickly just to test the federation issues lemmy.fmhy.net is having and didn't think twice about the image not displaying. Will fix it in a moment.

Edit: fixed

 

aspidhsaiodioasioashdahsudaios :)

 
 
[–] Martineski@lemmy.fmhy.net 4 points 1 year ago (1 children)

I was a very active poster before lemmy.fmhy.ml went down. I made over 1000 posts and almost 1000 comments in less than 2 months. Now that the instance is back under a new domain, I'm going to do the same.

[–] Martineski@lemmy.fmhy.net 4 points 1 year ago (3 children)

Me staying with fmhy and carrying their local feed.

 

I just migrated almost 300 posts from my old sublemmy on lemmy.fmhy.ml. Sorry for the inconvenience. I will now go through the posts, deleting the irrelevant ones and editing what's needed. After that, I will start catching up on news from the last ~2-3 months.

Before migrating, I deleted my singularity@lemmy.fmhy.net sub to avoid flooding other instances' feeds. I migrated everything right after creating singularityai@lemmy.fmhy.net, so there should have been zero federation during the process.

I will revive my artwork subs once I'm done catching up on AI news on this sub, which will take many days of work.

 

I discovered this woman, who I call Loab, in April. The AI reproduced her more easily than most celebrities. Her presence is persistent, and she haunts every image she touches.

 

I tried the Sms-Man service and purchased two separate phone numbers for SMS. Even though they list OpenAI as a site their service can be used on, OpenAI detects a bad carrier with these numbers. Has anyone had better luck with another SMS rental service?

 

In human conversations, individuals can indicate relevant regions within a scene while addressing others. In turn, the other person can then respond by referring to specific regions if necessary. This natural referential ability in dialogue remains absent in current Multimodal Large Language Models (MLLMs). To fill this gap, this paper proposes an MLLM called Shikra, which can handle spatial coordinate inputs and outputs in natural language. Its architecture consists of a vision encoder, an alignment layer, and an LLM. It is designed to be straightforward and simple, without the need for extra vocabularies, position encoders, pre-/post-detection modules, or external plug-in models. All inputs and outputs are in natural language form. Referential dialogue is a superset of various vision-language (VL) tasks. Shikra can naturally handle location-related tasks like REC and PointQA, as well as conventional VL tasks such as Image Captioning and VQA. Experimental results showcase Shikra's promising performance. Furthermore, it enables numerous exciting applications, like providing the coordinates of mentioned objects in chains of thought and comparing the similarities of user-pointed regions. Our code, model and dataset are available at this https URL.
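
As a very rough, hypothetical sketch of the three-part design the abstract describes (vision encoder, alignment layer, LLM), with coordinates kept as plain text rather than special tokens: all module names, sizes, and the toy transformer below are invented for illustration and are not the authors' code.

```python
# Toy stand-in for the Shikra-style pipeline: vision encoder -> alignment layer -> LLM.
# Everything here is a placeholder; coordinates would be emitted as ordinary text,
# e.g. "[0.32, 0.11, 0.58, 0.46]", rather than via a position encoder or extra vocabulary.
import torch
import torch.nn as nn

class ToyReferentialMLLM(nn.Module):
    def __init__(self, img_dim=768, llm_dim=1024, vocab_size=32000):
        super().__init__()
        self.vision_encoder = nn.Linear(img_dim, img_dim)   # stand-in for a ViT
        self.alignment = nn.Linear(img_dim, llm_dim)         # maps visual features into the LLM space
        self.llm = nn.TransformerEncoder(                    # stand-in for a decoder-only LLM
            nn.TransformerEncoderLayer(d_model=llm_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.text_embed = nn.Embedding(vocab_size, llm_dim)
        self.lm_head = nn.Linear(llm_dim, vocab_size)

    def forward(self, image_patches, prompt_token_ids):
        # Visual tokens and text tokens are concatenated into a single sequence.
        vis = self.alignment(self.vision_encoder(image_patches))
        txt = self.text_embed(prompt_token_ids)
        seq = torch.cat([vis, txt], dim=1)
        return self.lm_head(self.llm(seq))

model = ToyReferentialMLLM()
patches = torch.randn(1, 196, 768)            # fake 14x14 patch features
prompt = torch.randint(0, 32000, (1, 32))     # fake tokenized prompt
print(model(patches, prompt).shape)           # (1, 196 + 32, 32000)
```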

 

Amazon CEO Andy Jassy called generative A.I. “one of the biggest technical transformations of our lifetimes” in an interview with CNBC on Thursday. He also called many of today’s A.I. chatbots and other generative A.I. tools part of the “hype cycle,” declaring that Amazon was focused on the “substance cycle.”

Amazon’s bona fides in the space are well established; the company was a player in artificial intelligence and machine learning long before the ChatGPTs and Bards of the world were publicly released. Former Fortune editor Brian Dumaine wrote a book in 2020 about how Amazon founder Jeff Bezos realized early on that imbuing machine learning into every facet of the company would allow it to gather data to constantly improve itself.

Much as it did with Amazon Web Services, which practically birthed the cloud computing industry that now powers the internet’s biggest companies, including its competitors, Amazon’s A.I. strategy is focused on cementing its position as a major player across the entirety of the A.I. supply chain.

“Every single business unit inside of Amazon is working intensely and very broadly on generative A.I.,” Jassy says.

Jassy shed some light on Amazon’s A.I. game plan, outlining three macro layers: the computing capabilities, the underlying models, and what Jassy refers to as the “application layer,” for example, ChatGPT or Bard.

 

For 3D object manipulation, methods that build an explicit 3D representation perform better than those relying only on camera images. But using explicit 3D representations like voxels comes at a large computing cost, adversely affecting scalability. In this work, we propose RVT, a multi-view transformer for 3D manipulation that is both scalable and accurate. Some key features of RVT are an attention mechanism to aggregate information across views and re-rendering of the camera input from virtual views around the robot workspace. In simulations, we find that a single RVT model works well across 18 RLBench tasks with 249 task variations, achieving 26% higher relative success than the existing state-of-the-art method (PerAct). It also trains 36X faster than PerAct to achieve the same performance and achieves 2.3X the inference speed of PerAct. Further, RVT can perform a variety of manipulation tasks in the real world with just a few (∼10) demonstrations per task. Visual results, code, and trained model are provided at this https URL.
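
As a loose illustration only, the sketch below mimics the two ideas the abstract highlights: re-rendering the input into several virtual views around the workspace, and fusing those views with attention. The view count, shapes, and layer choices are assumptions made up for illustration, not the RVT implementation.

```python
# Hypothetical multi-view aggregation sketch; not the authors' code.
import torch
import torch.nn as nn

NUM_VIRTUAL_VIEWS = 5  # e.g. top, front, back, left, right around the workspace

def rerender_to_virtual_views(point_cloud):
    # Placeholder: a real system would project the reconstructed scene onto the
    # image planes of virtual cameras. Here we just fake per-view feature maps.
    batch = point_cloud.shape[0]
    return torch.randn(batch, NUM_VIRTUAL_VIEWS, 64, 32, 32)  # (B, views, C, H, W)

class MultiViewAggregator(nn.Module):
    def __init__(self, channels=64, dim=128):
        super().__init__()
        self.per_view = nn.Conv2d(channels, dim, kernel_size=4, stride=4)  # patchify each view
        self.cross_view = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.action_head = nn.Linear(dim, 7)  # e.g. gripper pose + open/close

    def forward(self, views):                                     # views: (B, V, C, H, W)
        b, v, c, h, w = views.shape
        feat = self.per_view(views.reshape(b * v, c, h, w))       # (B*V, dim, 8, 8)
        tokens = feat.flatten(2).transpose(1, 2)                  # (B*V, 64, dim)
        tokens = tokens.reshape(b, v * tokens.shape[1], -1)       # (B, V*64, dim)
        fused = self.cross_view(tokens)                           # attention mixes tokens across views
        return self.action_head(fused.mean(dim=1))

views = rerender_to_virtual_views(torch.randn(2, 10000, 3))  # fake point cloud
print(MultiViewAggregator()(views).shape)                    # (2, 7)
```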

 

Covid-19 is said to cause long-term side effects in up to 67% of patients, and these health consequences can include chronic fatigue, loss of taste and smell and brain fog. Increasingly common too is Covid-related hair loss. Known as telogen effluvium, this phenomenon manifests as clumps of hair falling out after brushing or washing your hair.

It’s normal to shed hair daily – we lose about 100-150 hairs each day as hair drops from follicles to make way for new hair growth. This growth cycle occurs because 90% of the hair on our heads is in a growth phase (called anagen), while the remaining 10% is in a resting phase (called telogen). Anagen lasts for about three years before transitioning into the shorter telogen phase, following which hair is shed.
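
A quick back-of-the-envelope check of those figures; note the ~100,000 scalp follicles and ~3-month telogen duration used below are common estimates, not numbers stated in the article:

```python
# Rough consistency check of the quoted shedding rate; the follicle count and
# telogen duration are assumptions, not taken from the article.
follicles = 100_000
telogen_fraction = 0.10      # ~10% of hairs are resting at any given time
telogen_days = 90            # telogen lasts roughly three months before the hair sheds

shed_per_day = follicles * telogen_fraction / telogen_days
print(round(shed_per_day))   # ~111, in line with the quoted 100-150 hairs per day
```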

A stressful event like childbirth, certain medications, intense psychological stress and Covid-19 can trigger our bodies to shift a greater-than-normal proportion of growing anagen hairs into a resting telogen state, according to the University of Utah.

“Covid-related hair loss can affect up to 33% of symptomatic patients and 10% of asymptomatic patients,” says a plastic surgeon who deals with hair loss patients. “And this kind of hair loss seems to be different from that induced by stress or disease as cytokines (substances secreted by the body’s immune system) appear to cause direct damage to hair follicles,” she adds.

Covid-induced hair loss has also been reported to start earlier after the stressful event – in two months instead of the usual three.

 

Recent work suggests that interpolating between the weights of two specialized language models can transfer knowledge between tasks in a way that multi-task learning cannot. However, very few have explored interpolation between more than two models, where each has a distinct knowledge base. In this paper, we introduce Derivative Free Weight-space Ensembling (DFWE), a new few-sample task transfer approach for open-domain dialogue. Our framework creates a set of diverse expert language models trained using a predefined set of source tasks. Next, we finetune each of the expert models on the target task, approaching the target task from several distinct knowledge bases. Finally, we linearly interpolate between the model weights using a gradient-free optimization algorithm to efficiently find a good interpolation weighting. We demonstrate the effectiveness of the method on FETA-Friends, outperforming the standard pretrain-finetune approach.
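
A hedged sketch of the core recipe as the abstract describes it: average the fine-tuned experts' weights and search the mixing coefficients with a gradient-free method. The random-search loop and helper names below are my own stand-ins, not the paper's DFWE implementation.

```python
# Hypothetical illustration of weight-space interpolation with gradient-free search.
import copy
import torch

def interpolate_state_dicts(state_dicts, alphas):
    """Weighted average of several models' parameters (alphas sum to 1)."""
    merged = copy.deepcopy(state_dicts[0])
    for key in merged:
        merged[key] = sum(a * sd[key] for a, sd in zip(alphas, state_dicts))
    return merged

def derivative_free_search(model, state_dicts, eval_fn, trials=50):
    """Gradient-free (here: random) search over interpolation weightings."""
    best_alphas, best_score = None, float("-inf")
    for _ in range(trials):
        alphas = torch.rand(len(state_dicts))
        alphas = (alphas / alphas.sum()).tolist()   # normalize to a convex combination
        model.load_state_dict(interpolate_state_dicts(state_dicts, alphas))
        score = eval_fn(model)                      # e.g. a validation metric on the target task
        if score > best_score:
            best_alphas, best_score = alphas, score
    return best_alphas, best_score

# Toy usage with tiny stand-in "experts" and a dummy objective
experts = [torch.nn.Linear(4, 2) for _ in range(3)]
model = torch.nn.Linear(4, 2)
alphas, score = derivative_free_search(
    model,
    [e.state_dict() for e in experts],
    eval_fn=lambda m: -m(torch.randn(8, 4)).pow(2).mean().item(),
)
print(alphas, score)
```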

 

The idea is simple: specify what you want to research, and the AI will autonomously research it for you in minutes! A rough sketch of the workflow follows the feature list below.

▸ One prompt generates an unbiased, factual and in-depth research report

▸ Generates research, outline, resource, and lesson reports

▸ Aggregates over 20 web sources per research task

▸ Includes an easy-to-use web interface

▸ Open source: https://github.com/assafelovic/gpt-researcher

▸ Scrapes web sources with JavaScript support

▸ Keeps track of the visited and used web sources and their context
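
The sketch below is only a rough outline of the "scrape many sources, then synthesize a report" workflow the feature list describes. The helper functions, prompt, and the `summarize` callback are placeholders of my own; the project's real implementation and API live in the linked repo.

```python
# Rough, hypothetical outline of an aggregate-and-summarize research loop.
import requests
from bs4 import BeautifulSoup

def fetch_text(url: str) -> str:
    """Download a page and strip it to plain text (no JavaScript rendering here)."""
    html = requests.get(url, timeout=15).text
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

def research(query: str, urls: list[str], summarize) -> str:
    """Scrape each source, keep track of what was used, and ask an LLM to
    synthesize a report grounded in those sources."""
    sources = {}
    for url in urls:
        try:
            sources[url] = fetch_text(url)[:5000]   # truncate to keep the prompt small
        except requests.RequestException:
            continue                                 # skip sources that fail to load
    context = "\n\n".join(f"Source: {u}\n{t}" for u, t in sources.items())
    prompt = f"Write an objective, in-depth research report on: {query}\n\n{context}"
    return summarize(prompt)                         # e.g. a call to your LLM of choice

# Usage sketch: report = research("state of open-source LLMs", candidate_urls, my_llm_call)
```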

[–] Martineski@lemmy.fmhy.net 3 points 1 year ago (1 children)

I'm totally fine with stuff like this being out there. It's just really annoying when authors try to force stuff like this into series that don't revolve around ecchi.

[–] Martineski@lemmy.fmhy.net 3 points 1 year ago

I'm subscribed to both and the names indeed look identical.

[–] Martineski@lemmy.fmhy.net 7 points 1 year ago

It's like that in most anime, and even if you find something good, there's a huge chance they will force in scenes like that even when they don't fit. It's a real shame, because anime doesn't need this trash to attract people; in fact, it accomplishes the opposite.

[–] Martineski@lemmy.fmhy.net 3 points 1 year ago

Also, from what I remember, you can't stock parts. You need to get the customer first, and only then can you order the parts, once the official repair stores have their stock ready. That makes the third-party stores less attractive because of the slowness.

[–] Martineski@lemmy.fmhy.net 7 points 1 year ago (1 children)

Watched it, loved it.
